251
Borovec J, Kybic J, Arganda-Carreras I, Sorokin DV, Bueno G, Khvostikov AV, Bakas S, Chang EIC, Heldmann S, Kartasalo K, Latonen L, Lotz J, Noga M, Pati S, Punithakumar K, Ruusuvuori P, Skalski A, Tahmasebi N, Valkonen M, Venet L, Wang Y, Weiss N, Wodzinski M, Xiang Y, Xu Y, Yan Y, Yushkevich P, Zhao S, Munoz-Barrutia A. ANHIR: Automatic Non-Rigid Histological Image Registration Challenge. IEEE Transactions on Medical Imaging 2020; 39:3042-3052. [PMID: 32275587] [PMCID: PMC7584382] [DOI: 10.1109/tmi.2020.2986331]
Abstract
The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, employed multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods successfully registered over 98% of all landmarks, and their mean target registration error (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions and all images are available for download.
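As an illustration of the accuracy measure reported above (a sketch, not challenge code; the function name and landmark array layout are assumptions), mean landmark error as a percentage of the image diagonal can be computed as:

```python
import numpy as np

def tre_percent_of_diagonal(moved, target, image_shape):
    """Mean landmark registration error, as a percentage of the image diagonal.

    moved, target: (N, 2) arrays of landmark coordinates (row, col).
    image_shape: (height, width) of the image the landmarks live in.
    """
    errors = np.linalg.norm(moved - target, axis=1)  # per-landmark distances
    diagonal = np.hypot(image_shape[0], image_shape[1])
    return 100.0 * errors.mean() / diagonal

# A single landmark 30 px off on a 3000 x 4000 image (diagonal 5000 px):
moved = np.array([[100.0, 100.0]])
target = np.array([[100.0, 130.0]])
print(tre_percent_of_diagonal(moved, target, (3000, 4000)))  # → 0.6
```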
252
Lange FJ, Ashburner J, Smith SM, Andersson JLR. A Symmetric Prior for the Regularisation of Elastic Deformations: Improved anatomical plausibility in nonlinear image registration. Neuroimage 2020; 219:116962. [PMID: 32497785] [PMCID: PMC7610794] [DOI: 10.1016/j.neuroimage.2020.116962]
Abstract
Nonlinear registration is critical to many aspects of neuroimaging research. It facilitates averaging and comparisons across multiple subjects, as well as reporting of data in a common anatomical frame of reference. It is, however, a fundamentally ill-posed problem, with many possible solutions which minimise a given dissimilarity metric equally well. We present a regularisation method capable of selectively driving solutions towards those which would be considered anatomically plausible, by penalising unlikely lineal, areal and volumetric deformations. This penalty is symmetric in the sense that geometric expansions and contractions are penalised equally, which encourages inverse-consistency. We demonstrate that this method is able to significantly reduce local volume changes and shape distortions compared to state-of-the-art elastic (FNIRT) and plastic (ANTs) registration frameworks. Crucially, this is achieved whilst simultaneously matching or exceeding the registration quality of these methods, as measured by overlap scores of labelled cortical regions. Extensive leveraging of GPU parallelisation has allowed us to solve this highly computationally intensive optimisation problem while maintaining reasonable run times of under half an hour.
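A minimal sketch of the symmetry idea described above, not the paper's actual regularizer: a (log J)^2 penalty on the Jacobian determinant of a deformation charges an expansion J = k and a contraction J = 1/k identically.

```python
import numpy as np

def symmetric_volume_penalty(disp, spacing=1.0):
    """Mean (log J)^2 penalty on the Jacobian determinant J of x -> x + u(x).

    Because log(k)^2 == log(1/k)^2, an expansion by a factor k and a
    contraction by the same factor are penalised equally.
    disp: (H, W, 2) displacement field u, component c displacing axis c.
    """
    du0_d0, du0_d1 = np.gradient(disp[..., 0], spacing)
    du1_d0, du1_d1 = np.gradient(disp[..., 1], spacing)
    # Jacobian determinant of the mapping x -> x + u(x)
    J = (1.0 + du0_d0) * (1.0 + du1_d1) - du0_d1 * du1_d0
    return np.mean(np.log(J) ** 2)

# The identity deformation (zero displacement) has J = 1 everywhere:
print(symmetric_volume_penalty(np.zeros((8, 8, 2))))  # → 0.0
```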
Affiliation(s)
- Frederik J Lange: Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
- John Ashburner: Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG, UK.
- Stephen M Smith: Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
- Jesper L R Andersson: Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
253
Ma Y, Lu C, Xiong K, Zhang W, Yang S. Spatial weight matrix in dimensionality reduction reconstruction for micro-electromechanical system-based photoacoustic microscopy. Vis Comput Ind Biomed Art 2020; 3:22. [PMID: 32996016] [PMCID: PMC7524599] [DOI: 10.1186/s42492-020-00058-6]
Abstract
A micro-electromechanical system (MEMS) scanning mirror accelerates the raster scanning of optical-resolution photoacoustic microscopy (OR-PAM). However, the nonlinear tilt angle-voltage characteristic of a MEMS mirror introduces distortion into the maximum back-projection image. Moreover, the size of the Airy disk, ultrasonic sensor properties, and thermal effects decrease the resolution. Thus, in this study, we proposed a spatial weight matrix (SWM) with dimensionality reduction for image reconstruction. The three-layer SWM contains the invariant information of the system, which includes a spatially dependent distortion correction and 3D deconvolution. We employed an ordinal-valued Markov random field and the Harris-Stephens algorithm, as well as a modified delay-and-sum method during time reversal. The results from the experiments and a quantitative analysis demonstrate that images can be effectively reconstructed using an SWM; this is also true for severely distorted images. The mutual information index between the reference images and registered images was, on average, 70.33 times higher than the initial index. Moreover, the peak signal-to-noise ratio increased by 17.08% after 3D deconvolution. This accomplishment offers a practical approach to image reconstruction and a promising method for real-time distortion correction in MEMS-based OR-PAM.
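The peak signal-to-noise ratio quoted above is a standard image-quality measure; a minimal sketch (not the paper's code) is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in decibels between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A constant error of 10 grey levels on an 8-bit scale:
ref = np.full((4, 4), 100.0)
noisy = ref + 10.0
print(round(psnr(ref, noisy), 2))  # → 28.13
```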
Affiliation(s)
- Yuanzheng Ma: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China.
- Chang Lu: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China.
- Kedi Xiong: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China.
- Wuyu Zhang: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China.
- Sihua Yang: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China.
254
Balanced multi-image demons for non-rigid registration of magnetic resonance images. Magn Reson Imaging 2020; 74:128-138. [PMID: 32966850] [DOI: 10.1016/j.mri.2020.09.013]
Abstract
A new approach is introduced for non-rigid registration of pairs of magnetic resonance images (MRI). It is a generalization of the demons algorithm with low computational cost, based on local information augmentation (by integrating multiple images) and a balanced implementation. Specifically, a single deformation that best registers several pairs of images is estimated. These images are extracted by applying different operators to the two original ones, processing the local neighborhood of each pixel. The following five images were found to be appropriate for MRI registration: the raw image and those obtained by contrast-limited adaptive histogram equalization, local median, local entropy and phase symmetry. Thus, each point in the images is supplemented by augmented information coming from processing its neighborhood. Moreover, image pairs are processed in alternation at each iteration of the algorithm (in a balanced way), computing both a forward and a backward registration. The new method (called balanced multi-image demons) was tested on sagittal MRIs from 10 patients, in both simulated and experimental conditions, improving performance over the classical demons approach with a minimal increase in computational cost (processing time around twice that of standard demons). Specifically, a simulated deformation was applied to the MRIs (either original or corrupted by additive Gaussian or speckle noise). In all tested cases, the new algorithm improved the estimation of the simulated deformation (squared estimation error decreased by about 65% on average). Moreover, statistically significant improvements were obtained in experimental tests, in which different brain regions (i.e., brain, posterior fossa and cerebellum) were identified by the atlas approach and compared to those manually delineated (on average, the Dice coefficient increased by about 6%). The conclusion is that a balanced method applied to multiple information sources extracted from neighboring pixels is a low-cost approach to improving registration of MRIs.
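The Dice coefficient used in the experimental comparison above is a standard overlap score; a minimal sketch (not the paper's implementation):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# A 4-pixel automatic mask vs. a 6-pixel manual one sharing 4 pixels:
auto = np.zeros((4, 4), dtype=int)
auto[1:3, 1:3] = 1
manual = np.zeros((4, 4), dtype=int)
manual[1:3, 1:4] = 1
print(dice(auto, manual))  # → 0.8
```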
255
Merjulah R, Chandra J. An Integrated Segmentation Techniques for Myocardial Ischemia. Pattern Recognition and Image Analysis 2020. [DOI: 10.1134/s1054661820030190]
256
Zou B, He Z, Zhao R, Zhu C, Liao W, Li S. Non-rigid retinal image registration using an unsupervised structure-driven regression network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.122]
257
Zachariadis O, Teatini A, Satpute N, Gómez-Luna J, Mutlu O, Elle OJ, Olivares J. Accelerating B-spline interpolation on GPUs: Application to medical image registration. Computer Methods and Programs in Biomedicine 2020; 193:105431. [PMID: 32283385] [DOI: 10.1016/j.cmpb.2020.105431]
Abstract
BACKGROUND AND OBJECTIVE B-spline interpolation (BSI) is a popular technique in the context of medical imaging due to its adaptability and robustness in 3D object modeling. A field that utilizes BSI is Image Guided Surgery (IGS). IGS provides navigation using medical images, which can be segmented and reconstructed into 3D models, often through BSI. Image registration tasks also use BSI to transform medical imaging data collected before the surgery and intra-operative data collected during the surgery into a common coordinate space. However, such IGS tasks are computationally demanding, especially when applied to 3D medical images, due to the complexity and amount of data involved. Therefore, optimization of IGS algorithms is greatly desirable, for example, to perform image registration tasks intra-operatively and to enable real-time applications. A traditional CPU does not have sufficient computing power to achieve these goals and, thus, it is preferable to rely on GPUs. In this paper, we introduce a novel GPU implementation of BSI to accelerate the calculation of the deformation field in non-rigid image registration algorithms. METHODS Our BSI implementation on GPUs minimizes the data that needs to be moved between memory and processing cores during loading of the input grid, and leverages the large on-chip GPU register file for reuse of input values. Moreover, we re-formulate our method as trilinear interpolations to reduce computational complexity and increase accuracy. To provide pre-clinical validation of our method and demonstrate its benefits in medical applications, we integrate our improved BSI into a registration workflow for compensation of liver deformation (caused by pneumoperitoneum, i.e., inflation of the abdomen) and evaluate its performance. RESULTS Our approach improves the performance of BSI by an average of 6.5× and interpolation accuracy by 2× compared to three state-of-the-art GPU implementations. Through pre-clinical validation, we demonstrate that our optimized interpolation accelerates a non-rigid image registration algorithm, which is based on the Free Form Deformation (FFD) method, by up to 34%. CONCLUSION Our study shows that we can achieve significant performance and accuracy gains with our novel parallelization scheme that makes effective use of the GPU resources. We show that our method improves the performance of real medical imaging registration applications used in practice today.
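The cubic B-spline weights underlying FFD-style interpolation can be sketched as follows; this illustrates the standard uniform cubic B-spline basis, not the paper's GPU kernel:

```python
def cubic_bspline_weights(t):
    """Uniform cubic B-spline basis weights for a fractional offset t in [0, 1).

    In FFD-style interpolation these four weights blend the four nearest
    control points along one axis; in 3D the product of three such weight
    sets blends a 4 x 4 x 4 neighbourhood. They form a partition of unity.
    """
    return (
        (1.0 - t) ** 3 / 6.0,
        (3.0 * t ** 3 - 6.0 * t ** 2 + 4.0) / 6.0,
        (-3.0 * t ** 3 + 3.0 * t ** 2 + 3.0 * t + 1.0) / 6.0,
        t ** 3 / 6.0,
    )

w = cubic_bspline_weights(0.5)
print(abs(sum(w) - 1.0) < 1e-12)  # → True
print(w[1] == w[2])               # → True (symmetric halfway between knots)
```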
Affiliation(s)
- Orestis Zachariadis: Department of Electronics and Computer Engineering, Universidad de Cordoba, Córdoba, Spain.
- Andrea Teatini: The Intervention Centre, Oslo University Hospital - Rikshospitalet, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway.
- Nitin Satpute: Department of Electronics and Computer Engineering, Universidad de Cordoba, Córdoba, Spain.
- Juan Gómez-Luna: Department of Computer Science, ETH Zurich, Zurich, Switzerland.
- Onur Mutlu: Department of Computer Science, ETH Zurich, Zurich, Switzerland.
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital - Rikshospitalet, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway.
- Joaquín Olivares: Department of Electronics and Computer Engineering, Universidad de Cordoba, Córdoba, Spain.
258
Wen Y, Xu C, Lu Y, Li Q, Cai H, He L. Gabor Feature Based LogDemons with Inertial Constraint for Nonrigid Image Registration. IEEE Transactions on Image Processing 2020; PP:8238-8250. [PMID: 32755862] [DOI: 10.1109/tip.2020.3013169]
Abstract
Nonrigid image registration plays an important role in computer vision and medical applications. Methods based on the demons algorithm for image registration usually use intensity difference as the similarity criterion. However, intensity-based methods cannot preserve image texture details well and are limited by local minima. To solve these problems, we propose a Gabor-feature-based LogDemons registration method, called GFDemons. We extract Gabor features of the registered images to construct a feature similarity metric, since Gabor filters are well suited to extracting image texture information. Furthermore, because of weak gradients in some image regions, the update fields are too small to transform the moving image to the fixed image correctly. To compensate for this deficiency, we propose an inertial constraint strategy based on GFDemons, named IGFDemons, which uses the previous update fields to provide guidance for the current update field. The inertial constraint strategy further improves the performance of the proposed method in terms of accuracy and convergence. Experiments on three different types of images demonstrate that the proposed methods achieve better performance than several popular methods.
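A Gabor filter of the kind used for texture features can be sketched as follows (a generic real Gabor kernel, not the paper's filter bank; parameter names are assumptions):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real 2D Gabor kernel: a Gaussian envelope times an oriented cosine.

    A bank of such kernels at several orientations and wavelengths yields
    the kind of texture-sensitive responses a feature similarity metric
    can be built on.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
print(k.shape)   # → (15, 15)
print(k[7, 7])   # → 1.0 (unit response at the centre)
```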
259
Ma J, Jiang X, Fan A, Jiang J, Yan J. Image Matching from Handcrafted to Deep Features: A Survey. Int J Comput Vis 2020. [DOI: 10.1007/s11263-020-01359-2]
Abstract
As a fundamental and critical task in various visual applications, image matching identifies and then corresponds the same or similar structure/content across two or more images. Over the past decades, a growing number and diversity of methods have been proposed for image matching, particularly with the development of deep learning techniques in recent years. However, open questions remain about which method is a suitable choice for specific applications with respect to different scenarios and task requirements, and how to design better image matching methods with superior accuracy, robustness and efficiency. This encourages us to conduct a comprehensive and systematic review and analysis of these classical and latest techniques. Following the feature-based image matching pipeline, we first introduce feature detection, description, and matching techniques from handcrafted methods to trainable ones, and provide an analysis of the development of these methods in theory and practice. Secondly, we briefly introduce several typical image matching-based applications for a comprehensive understanding of the significance of image matching. In addition, we provide a comprehensive and objective comparison of these classical and latest techniques through extensive experiments on representative datasets. Finally, we conclude with the current status of image matching technologies and deliver insightful discussions and prospects for future works. This survey can serve as a reference for (but not limited to) researchers and engineers in image matching and related fields.
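A classic step of the feature-based matching pipeline surveyed above is nearest-neighbour matching with a ratio test; a minimal sketch (the descriptor values are toy data):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with the classic ratio test.

    A candidate match (i, j) is kept only when the best distance is
    clearly smaller than the second-best one, discarding ambiguous
    correspondences. desc_a: (N, D), desc_b: (M >= 2, D).
    """
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        order = np.argsort(row)
        if row[order[0]] < ratio * row[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptors: two clean correspondences, one distractor in b.
a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]])
print(ratio_test_matches(a, b))  # → [(0, 0), (1, 1)]
```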
260
Trukhan S, Tafintseva V, Tøndel K, Großerueschkamp F, Mosig A, Kovalev V, Gerwert K, Kohler A. Grayscale representation of infrared microscopy images by extended multiplicative signal correction for registration with histological images. Journal of Biophotonics 2020; 13:e201960223. [PMID: 32352634] [DOI: 10.1002/jbio.201960223]
Abstract
Fourier-transform infrared (FTIR) microspectroscopy is close to becoming a label-free routine method for cancer diagnosis. In order to build infrared-spectral-based classifiers, infrared images need to be registered with Hematoxylin and Eosin (H&E) stained histological images. While FTIR images have a deep spectral domain with thousands of channels carrying chemical and scatter information, H&E images have only three color channels per pixel and carry mainly morphological information. Therefore, image representations of infrared images are needed that match the morphological information in H&E images. In this paper, we propose a novel approach for the representation of FTIR images based on extended multiplicative signal correction, highlighting morphological features that were shown to correlate well with morphological information in H&E images. Based on the obtained representations, we developed a strategy for global-to-local image registration of FTIR images and H&E stained histological images of parallel tissue sections.
Affiliation(s)
- Stanislau Trukhan: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Biomedical Image Analysis, United Institute of Informatics Problems, Minsk, Belarus.
- Valeria Tafintseva: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.
- Kristin Tøndel: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.
- Frederik Großerueschkamp: Department of Biophysics, Ruhr University Bochum, Bochum, Germany; Center for Protein Diagnostics (ProDi), Ruhr University Bochum, Bochum, Germany.
- Axel Mosig: Department of Biophysics, Ruhr University Bochum, Bochum, Germany; Center for Protein Diagnostics (ProDi), Ruhr University Bochum, Bochum, Germany.
- Vassili Kovalev: Department of Biomedical Image Analysis, United Institute of Informatics Problems, Minsk, Belarus.
- Klaus Gerwert: Department of Biophysics, Ruhr University Bochum, Bochum, Germany; Center for Protein Diagnostics (ProDi), Ruhr University Bochum, Bochum, Germany.
- Achim Kohler: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.
261
Guo Y, Wu X, Wang Z, Pei X, Xu XG. End-to-end unsupervised cycle-consistent fully convolutional network for 3D pelvic CT-MR deformable registration. J Appl Clin Med Phys 2020; 21:193-200. [PMID: 32657533] [PMCID: PMC7497923] [DOI: 10.1002/acm2.12968]
Abstract
Objective: To improve the efficiency of computed tomography (CT)-magnetic resonance (MR) deformable image registration while ensuring registration accuracy. Methods: Two fully convolutional networks (FCNs) for generating spatial deformable grids were proposed, using the cycle-consistent method to ensure the deformed image's consistency with the reference image. In all, 74 pelvic cases consisting of both MR and CT images were studied, among which 64 cases were used as training data and 10 cases as testing data. All training data were standardized and normalized, following simple image preparation to remove redundant air. Dice coefficients and average surface distance (ASD) were calculated for regions of interest (ROIs) of CT-MR image pairs, before and after registration. The performance of the proposed method (FCN with cycle-consistency) was compared with that of Elastix software, MIM software, and an FCN without cycle-consistency. Results: The proposed method achieved the best registration accuracy among the four methods tested and was in general more stable than the others. In terms of average registration time, Elastix took 64 s, MIM software took 28 s, and the proposed method was significantly faster, taking <0.1 s. Conclusion: The proposed method not only ensures the accuracy of deformable image registration but also greatly reduces the time required, improving the efficiency of the registration process. In addition, compared with other deep learning methods, the proposed method is completely unsupervised and end-to-end.
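The average surface distance (ASD) metric mentioned above can be sketched for point-sampled surfaces as follows (a generic definition, not the paper's code):

```python
import numpy as np

def average_surface_distance(surf_a, surf_b):
    """Symmetric average surface distance between two point-sampled surfaces.

    For each point on one surface take the distance to the closest point
    on the other surface; average over both directions.
    surf_a: (N, D), surf_b: (M, D) arrays of surface points.
    """
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two short contours lying on parallel lines 2 units apart:
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 2.0], [1.0, 2.0]])
print(average_surface_distance(a, b))  # → 2.0
```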
Affiliation(s)
- Yi Guo: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China.
- Xiangyi Wu: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China.
- Zhi Wang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Department of Radiology, The First Affiliated Hospital of Anhui Medical University of China, Hefei, Anhui, China.
- Xi Pei: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China.
- X George Xu: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY, USA.
262
A Registration and Deep Learning Approach to Automated Landmark Detection for Geometric Morphometrics. Evol Biol 2020; 47:246-259. [PMID: 33583965] [DOI: 10.1007/s11692-020-09508-8]
Abstract
Geometric morphometrics is the statistical analysis of landmark-based shape variation and its covariation with other variables. Over the past two decades, the gold standard of landmark data acquisition has been manual detection by a single observer. This approach has proven accurate and reliable in small-scale investigations. However, big data initiatives are increasingly common in biology and morphometrics. This requires fast, automated, and standardized data collection. We combine techniques from image registration, geometric morphometrics, and deep learning to automate and optimize anatomical landmark detection. We test our method on high-resolution, micro-computed tomography images of adult mouse skulls. To ensure generalizability, we use a morphologically diverse sample and implement fundamentally different deformable registration algorithms. Compared to landmarks derived from conventional image registration workflows, our optimized landmark data show up to a 39.1% reduction in average coordinate error and a 36.7% reduction in total distribution error. In addition, our landmark optimization produces estimates of the sample mean shape and variance-covariance structure that are statistically indistinguishable from expert manual estimates. For biological imaging datasets and morphometric research questions, our approach can eliminate the time and subjectivity of manual landmark detection whilst retaining the biological integrity of these expert annotations.
263
Zhu Z, Cao Y, Qin C, Rao Y, Ni D, Wang Y. Unsupervised 3D End-to-end Deformable Network for Brain MRI Registration. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2020; 2020:1355-1359. [PMID: 33018240] [DOI: 10.1109/embc44109.2020.9176475]
Abstract
Volumetric medical image registration has important clinical significance. Traditional registration methods may be time-consuming when processing large volumetric data due to their iterative optimizations. In contrast, existing deep learning-based networks can obtain the registration quickly. However, most of them require independent rigid alignment before deformable registration; these two steps are often performed separately and cannot be end-to-end. Moreover, registration ground-truth is difficult to obtain for supervised learning methods. To tackle the above issues, we propose an unsupervised 3D end-to-end deformable registration network. The proposed network cascades two subnetworks; the first one is for obtaining affine alignment, and the second one is a deformable subnetwork for achieving the non-rigid registration. The parameters of the two subnetworks are shared. The global and local similarity measures are used as loss functions for the two subnetworks, respectively. The trained network can perform end-to-end deformable registration. We conducted experiments on brain MRI datasets (LPBA40, Mindboggle101, and IXI) and experimental results demonstrate the efficacy of the proposed registration network.
264
Fechter T, Baltas D. One-Shot Learning for Deformable Medical Image Registration and Periodic Motion Tracking. IEEE Transactions on Medical Imaging 2020; 39:2506-2517. [PMID: 32054571] [DOI: 10.1109/tmi.2020.2972616]
Abstract
Deformable image registration is a very important field of research in medical imaging. Recently, multiple deep learning approaches have been published in this area showing promising results. However, drawbacks of deep learning methods are the need for large training datasets and their inability to register unseen images that differ from the training data. One-shot learning does not require large training datasets and has already been proven applicable to 3D data. In this work we present a one-shot registration approach for periodic motion tracking in 3D and 4D datasets. When applied to a 3D dataset, the algorithm calculates the inverse of the registration vector field simultaneously. For registration we employed a U-Net combined with a coarse-to-fine approach and a differential spatial transformer module. The algorithm was thoroughly tested on multiple publicly available 4D and 3D datasets. The results show that the presented approach is able to track periodic motion and yields competitive registration accuracy. Possible applications include use as a stand-alone algorithm for 3D and 4D motion tracking, or at the beginning of studies until enough datasets for a separate training phase are available.
265
Tian Y, Hu Y, Ma Y, Hao H, Mou L, Yang J, Zhao Y, Liu J. Multi-scale U-net with Edge Guidance for Multimodal Retinal Image Deformable Registration. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2020; 2020:1360-1363. [PMID: 33018241] [DOI: 10.1109/embc44109.2020.9175613]
Abstract
Registration of multimodal retinal images, such as registration between color fundus images and optical coherence tomography (OCT) images, is of great importance in facilitating the diagnosis and treatment of many eye diseases. However, ground truth is difficult to obtain, and most existing algorithms perform rigid registration without considering optical distortion. In this paper, we present an unsupervised learning method for deformable registration between the two modalities. To solve the registration problem, the architecture achieves a multi-level receptive field and takes contour and local detail into account. To measure the edge differences caused by different distortions at the optical center and edge, an edge similarity (ES) loss term is proposed; the loss function is thus composed of local cross-correlation, edge similarity, and a diffusion regularizer on the spatial gradients of the deformation matrix. We propose a multi-scale input layer, a U-net with dilated convolution structure, squeeze-excitation (SE) blocks, and spatial transformer layers. Quantitative experiments show that the proposed framework performs best compared with several conventional and deep learning-based methods, and that our ES loss and architecture combined with U-net and multi-scale layers achieve competitive results for normal and abnormal images.
|
266
|
Li Q, Li S, Wu Y, Guo W, Qi S, Huang G, Chen S, Liu Z, Chen X. Orientation-independent Feature Matching (OIFM) for Multimodal Retinal Image Registration. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101957] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
267
|
Zeng Q, Fu Y, Tian Z, Lei Y, Zhang Y, Wang T, Mao H, Liu T, Curran WJ, Jani AB, Patel P, Yang X. Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy. Phys Med Biol 2020; 65:135002. [PMID: 32330922 DOI: 10.1088/1361-6560/ab8cd6] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Registration and fusion of magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) of the prostate can provide guidance for prostate brachytherapy. However, accurate registration remains challenging due to the lack of ground truth regarding voxel-level spatial correspondence, the limited field of view, and the low contrast-to-noise and signal-to-noise ratios of TRUS. In this study, we proposed a fully automated deep learning approach based on weak supervision to address these issues. We employed deep learning techniques to combine image segmentation and registration, including affine and nonrigid registration, to perform automated deformable MRI-TRUS registration. First, we trained two separate fully convolutional neural networks (CNNs) to perform pixel-wise prostate segmentation for MRI and TRUS. Then, to initialize the registration, a 2D CNN registered the MRI-TRUS prostate images with an affine transformation. After that, a 3D UNET-like network was applied for nonrigid registration. For both the affine and nonrigid registration, pairs of MRI-TRUS labels were concatenated and fed into the neural networks for training. Because ground-truth voxel-level correspondences are unavailable and accurate intensity-based image similarity measures are lacking, we propose to use prostate label-derived volume overlaps and surface agreements as the optimization objective for weakly supervised network training. Specifically, we proposed a hybrid loss function that integrates a Dice loss, a surface-based loss, and a bending energy regularization loss for the nonrigid registration. The Dice and surface-based losses encourage alignment of the prostate label between the MRI and the TRUS, while the bending energy regularization loss promotes a smooth deformation field. Thirty-six sets of patient data were used to test our registration method.
The image registration results showed that the deformed MR image aligned well with the TRUS image, as judged by corresponding cysts and calcifications in the prostate. The quantitative results showed that our method produced a mean target registration error (TRE) of 2.53 ± 1.39 mm and a mean Dice score of 0.91 ± 0.02. The mean surface distance (MSD) and Hausdorff distance (HD) between the registered MR prostate shape and the TRUS prostate shape were 0.88 and 4.41 mm, respectively. This work presents a deep learning-based, weakly supervised network for accurate MRI-TRUS image registration. Our proposed method achieved promising registration performance in terms of Dice score, TRE, MSD, and HD.
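The Dice and bending-energy pieces of such a hybrid weak-supervision objective are easy to make concrete. A minimal 1D sketch, in which the surface-based term is omitted and the weight `lam` is an illustrative assumption:

```python
def dice(a, b):
    """Dice coefficient of two binary masks given as flat 0/1 lists."""
    inter = sum(x * y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b) + 1e-8)

def bending_energy(field):
    """Sum of squared discrete second derivatives of a 1D displacement
    field; zero for any affine (here: linear) displacement."""
    return sum((field[i - 1] - 2 * field[i] + field[i + 1]) ** 2
               for i in range(1, len(field) - 1))

def hybrid_loss(mask_mri, mask_trus, field, lam=0.01):
    """Label-overlap data term plus smoothness penalty, to be minimized."""
    return (1.0 - dice(mask_mri, mask_trus)) + lam * bending_energy(field)
```

Perfectly overlapping labels warped by a linear displacement field give a loss near zero, which is the behavior the weakly supervised training exploits in place of voxel-level correspondences.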
Affiliation(s)
- Qiulan Zeng
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, United States of America
|
268
|
Fast graph-cut based optimization for practical dense deformable registration of volume images. Comput Med Imaging Graph 2020; 84:101745. [PMID: 32623293 DOI: 10.1016/j.compmedimag.2020.101745] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 03/30/2020] [Accepted: 05/29/2020] [Indexed: 11/22/2022]
Abstract
Deformable image registration is a fundamental problem in medical image analysis, with applications such as longitudinal studies, population modeling, and atlas-based image segmentation. Registration is often phrased as an optimization problem, i.e., finding a deformation field that is optimal according to a given objective function. Discrete combinatorial optimization techniques have successfully been employed to solve the resulting problem; specifically, optimization based on α-expansion with minimal graph cuts has been proposed as a powerful tool for image registration. The high computational cost of graph-cut based optimization, however, limits its utility for the registration of large volume images. Here, we propose to accelerate graph-cut based deformable registration by dividing the image into overlapping sub-regions and restricting the α-expansion moves to a single sub-region at a time. We demonstrate empirically that this approach can achieve a large reduction in computation time, from days to minutes, with only a small penalty in solution quality. Graph-cut based image registration has previously been shown to produce excellent results, but its computational cost has hindered its adoption for large medical volume images; the proposed method lifts this restriction, requiring only a small fraction of the computational cost to produce results of comparable quality.
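The scheduling idea, covering the volume with overlapping blocks and optimizing one block at a time, can be sketched independently of the graph-cut solver itself. A minimal N-D block generator; block size and overlap values are illustrative, and the local α-expansion solver would be invoked once per yielded block:

```python
import itertools

def overlapping_blocks(shape, block, overlap):
    """Yield tuples of per-axis (start, stop) index pairs covering an N-D
    volume with overlapping blocks."""
    def axis_starts(extent, b, o):
        step = max(b - o, 1)
        starts = list(range(0, max(extent - b, 0) + 1, step))
        if starts[-1] + b < extent:      # make sure the tail is covered
            starts.append(extent - b)
        return starts

    axes = [axis_starts(e, b, o) for e, b, o in zip(shape, block, overlap)]
    for corner in itertools.product(*axes):
        yield tuple((s, min(s + b, e))
                    for s, b, e in zip(corner, block, shape))
```

The overlap lets deformation updates propagate across block boundaries over successive sweeps, which is what keeps the restricted moves from fragmenting the solution.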
|
269
|
Mang A, Bakas S, Subramanian S, Davatzikos C, Biros G. Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology. Annu Rev Biomed Eng 2020; 22:309-341. [PMID: 32501772 PMCID: PMC7520881 DOI: 10.1146/annurev-bioeng-062117-121105] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (a) biophysical growth modeling and simulation, (b) inverse problems for model calibration, (c) these models' integration with imaging workflows, and (d) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
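The growth models referred to here are commonly of reaction-diffusion (Fisher-Kolmogorov) type, dc/dt = D ∇²c + ρ c(1 − c), where c is a normalized tumor cell density. A minimal explicit 1D integration, with illustrative parameter values and fixed (Dirichlet) boundary cells:

```python
def fisher_kpp_step(c, D, rho, dx, dt):
    """One forward-Euler step of dc/dt = D*c_xx + rho*c*(1-c).
    End cells are held fixed (Dirichlet boundary) for brevity."""
    n = len(c)
    out = c[:]
    for i in range(1, n - 1):
        lap = (c[i - 1] - 2 * c[i] + c[i + 1]) / dx ** 2  # diffusion
        out[i] = c[i] + dt * (D * lap + rho * c[i] * (1 - c[i]))
    return out

def simulate(c0, D=0.1, rho=1.0, dx=1.0, dt=0.1, steps=100):
    c = c0[:]
    for _ in range(steps):
        c = fisher_kpp_step(c, D, rho, dx, dt)
    return c
```

Starting from a localized seed, the density grows logistically toward 1 at the seed while diffusing outward, which is the qualitative invasion/proliferation behavior the calibration problems in this review try to fit to mpMRI data.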
Affiliation(s)
- Andreas Mang
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Spyridon Bakas
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Shashank Subramanian
- Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), Department of Radiology, and Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- George Biros
- Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
|
270
|
Sun L, Shao W, Zhang D, Liu M. Anatomical Attention Guided Deep Networks for ROI Segmentation of Brain MR Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2000-2012. [PMID: 31899417 DOI: 10.1109/tmi.2019.2962792] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Brain region-of-interest (ROI) segmentation based on structural magnetic resonance imaging (MRI) scans is an essential step in many computer-aided medical image analysis applications. Due to the low intensity contrast around ROI boundaries and large inter-subject variance, effectively segmenting brain ROIs from structural MR images remains a challenging task. Even though several deep learning methods for brain MR image segmentation have been developed, most of them do not incorporate shape priors that exploit the regularity of brain structures, leading to sub-optimal performance. To address this issue, we propose an anatomical attention guided deep learning framework for brain ROI segmentation of structural MR images, consisting of two subnetworks. The first is a segmentation subnetwork, used to simultaneously extract a discriminative image representation and segment ROIs for each input MR image. The second is an anatomical attention subnetwork, designed to capture the anatomical structure of the brain from a set of labeled atlases. To utilize the anatomical attention knowledge learned from atlases, we develop an anatomical gate architecture that fuses feature maps derived from a set of atlas label maps with those from the to-be-segmented image. In this way, the anatomical prior learned from atlases is explicitly employed to guide the segmentation process and improve performance. Within this framework, we develop two anatomical attention guided segmentation models, denoted as the anatomical gated fully convolutional network (AG-FCN) and the anatomical gated U-Net (AG-UNet), respectively. Experimental results on both the ADNI and LONI-LPBA40 datasets suggest that the proposed AG-FCN and AG-UNet achieve superior ROI segmentation performance on brain MR images compared with several state-of-the-art methods.
|
271
|
Scheufele K, Subramanian S, Mang A, Biros G, Mehl M. IMAGE-DRIVEN BIOPHYSICAL TUMOR GROWTH MODEL CALIBRATION. SIAM JOURNAL ON SCIENTIFIC COMPUTING : A PUBLICATION OF THE SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS 2020; 42:B549-B580. [PMID: 33071533 PMCID: PMC7561052 DOI: 10.1137/19m1275280] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We present a novel formulation for the calibration of a biophysical tumor growth model from a single-time-snapshot, multiparametric magnetic resonance imaging (MRI) scan of a glioblastoma patient. Tumor growth models are typically nonlinear parabolic partial differential equations (PDEs), so a single patient snapshot is not sufficient to calibrate them; we therefore have to generate a second snapshot. We create this two-snapshot scenario as follows. We use an atlas (an average of several scans of healthy individuals) as a substitute for an earlier, pre-tumor MRI scan of the patient. Then, using the patient scan and the atlas, we combine image-registration and parameter-estimation algorithms to achieve a better estimate of the healthy patient scan and the tumor growth parameters that are consistent with the data. Our scheme is based on our recent work (Scheufele et al., Comput. Methods Appl. Mech. Engrg., to appear), but we apply a different and novel scheme in which, in contrast to the previous work, the tumor growth simulation is executed in the patient brain domain rather than the atlas domain, yielding more meaningful patient-specific results. As a basis, we use a PDE-constrained optimization framework. We derive a modified Picard-iteration-type solution strategy in which we alternate between registration and tumor parameter estimation in a new way. In addition, we consider an ℓ1 sparsity constraint on the initial condition for the tumor and integrate it with the new joint inversion scheme. We solve the subproblems with a reduced-space, inexact Gauss-Newton-Krylov/quasi-Newton method. We present results using real brain data with synthetic tumor data showing that the new scheme reconstructs the tumor parameters more accurately and reliably than our earlier scheme.
Affiliation(s)
- Klaudius Scheufele
- Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, 70569 Stuttgart, Germany
- Shashank Subramanian
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 E. 24th Street, Austin, TX 78712-1229
- Andreas Mang
- Department of Mathematics, University of Houston, 3551 Cullen Blvd., Houston, TX 77204-3008
- George Biros
- Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 E. 24th Street, Austin, TX 78712-1229
- Miriam Mehl
- Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, 70569 Stuttgart, Germany
|
272
|
Suojärvi N, Tampio J, Lindfors N, Waris E. Computer-aided 3D analysis of anatomy and radiographic parameters of the distal radius. Clin Anat 2020; 34:574-580. [PMID: 32346905 DOI: 10.1002/ca.23615] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 04/21/2020] [Accepted: 04/23/2020] [Indexed: 12/11/2022]
Abstract
INTRODUCTION This study applied mathematical modeling to examine the anatomy of the distal radius, to define the radiographic parameters in a 3D imaging modality, and to report their normal ranges in the uninjured radius. MATERIALS AND METHODS A series of 50 cone-beam computed tomography (CBCT) scans of uninjured radii was analyzed using computer-aided image processing. The radius shape model was used to determine the optimal location for measuring the longitudinal axis. With the axis determined, the volar tilt and radial inclination angles and the areas of the articular facets and their reference points were analyzed. RESULTS The optimal location for determining the longitudinal axis was between 28.8 and 53.3 mm proximal to the articular surface. The mean radial inclination angle was 21.8°. The mean volar tilt angle measured via the most distal tips of the volar and dorsal rims was 13.0°; along the lunate and scaphoid facets it was 9.1° and 11.2°, respectively. The scaphoid facet was larger than the lunate facet, and 25% of it was convex. CONCLUSIONS Computer-aided CBCT image processing offers an advanced tool for recording the 3D geometry and radiographic parameters of the osseous structures of the wrist. Analysis of the anatomy of the distal radius showed that the longitudinal axis depended on its measurement location, which in turn affected the determination of the angular parameters. We also report the variation of the volar tilt along the articular surface and the shapes and sizes of the articular facets.
Affiliation(s)
- Nora Suojärvi
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Nina Lindfors
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Eero Waris
- Department of Hand Surgery, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
|
273
|
Maynard E, Heath E, Hilts M, Jirasek A. Evaluation of an x-ray CT polymer gel dosimetry system in the measurement of deformed dose. Biomed Phys Eng Express 2020; 6:035031. [PMID: 33438676 DOI: 10.1088/2057-1976/ab895a] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
This study evaluates the use of an N-isopropylacrylamide (NIPAM)-based x-ray CT polymer gel dosimetry (PGD) system in the measurement of deformed dose. This work also compares dose measured by the gel dosimetry system to dose calculated by a novel deformable dose accumulation algorithm, defDOSXYZnrc, that uses direct voxel tracking. Deformable gels were first irradiated using a single 3.5 × 5 cm² open field, and the static dose was compared to defDOSXYZnrc as a control measurement. The gel measurement was in excellent agreement with defDOSXYZnrc in the static case, with gamma passing rates of 94.5% using a 3%/3 mm criterion and 93.3% using a 3%/2 mm criterion. Following the static measurements, a deformable gel was irradiated with the same single field under an external compression of 25 mm and then released from this compression for dosimetric readout. The measured deformed dose was then compared to deformed dose calculated by defDOSXYZnrc based on deformation vectors produced by the Velocity AI deformable image registration (DIR) algorithm. In the deformed dose distribution there were differences between the measured and calculated field position of up to 0.8 mm and differences between the measured and calculated field size of up to 11.9 mm. Gamma pass rates were 60.0% using a 3%/3 mm criterion and 56.8% using a 3%/2 mm criterion for the deforming measurements, a decrease in agreement compared to the control measurements. Further analysis showed that passing rates increased to 86.5% using a 3%/3 mm criterion and 70.5% using a 3%/2 mm criterion in voxels within 5 mm of the fiducial markers used to guide the deformable image registration. This work represents the first measurement of deformed dose using x-ray CT polymer gel dosimetry. Overall, these results highlight some of the challenges in the calculation and measurement of deforming dose and provide insight into possible strategies for improvement.
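The gamma passing rates quoted above (e.g. 3%/3 mm) combine a dose tolerance and a distance-to-agreement tolerance: a reference point passes if some evaluated point is simultaneously close in dose and in position. A minimal 1D global-gamma sketch; the profile sampling and normalization choices are simplifying assumptions:

```python
import math

def gamma_pass_rate(ref, ref_pos, ev, ev_pos, dose_tol, dist_tol, norm):
    """Fraction of reference points with gamma index <= 1.
    dose_tol is fractional (0.03 for 3%), dist_tol in mm, norm is the
    normalization dose for the global dose criterion."""
    passed = 0
    for d_r, x_r in zip(ref, ref_pos):
        g = min(math.sqrt(((d_e - d_r) / (dose_tol * norm)) ** 2 +
                          ((x_e - x_r) / dist_tol) ** 2)
                for d_e, x_e in zip(ev, ev_pos))
        passed += g <= 1.0
    return passed / len(ref)
```

Identical profiles yield a passing rate of 1.0; a uniform dose error far outside the tolerance fails every point, regardless of the distance criterion.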
Affiliation(s)
- E Maynard
- Department of Physics and Astronomy, University of Victoria, Victoria, BC V8W 2Y2, Canada
|
274
|
Zhang X, Gilliam C, Blu T. All-pass Parametric Image Registration. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:5625-5640. [PMID: 32275596 DOI: 10.1109/tip.2020.2984897] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Image registration is a required step in many practical applications that involve the acquisition of multiple related images. In this paper, we propose a methodology to deal with both the geometric and intensity transformations in the image registration problem. The main idea is to modify an accurate and fast elastic registration algorithm (Local All-Pass, LAP) so that it returns a parametric displacement field, and to estimate the intensity changes by fitting another parametric expression. Although we demonstrate the methodology using a low-order parametric model, our approach is highly flexible and easily allows substantially richer parametrisations, while requiring only limited extra computational cost. In addition, we propose two novel quantitative criteria to evaluate the accuracy of the alignment of two images ("salience correlation") and the number of degrees of freedom ("parsimony") of a displacement field, respectively. Experimental results on both synthetic and real images demonstrate the high accuracy and computational efficiency of our methodology. Furthermore, we demonstrate that the resulting displacement fields are more parsimonious than those obtained with other state-of-the-art image registration approaches.
|
275
|
Iqbal T, Shah SK, Ullah F, Mehmood S, Zeb MA. Analysis of deformable distortion in the architecture of leaf xylary vessel elements of Carthamus oxycantha caused by heavy metals stress using image registration. Microsc Res Tech 2020; 83:843-849. [PMID: 32233100 DOI: 10.1002/jemt.23476] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 03/11/2020] [Indexed: 11/05/2022]
Abstract
An anatomical study of leaf xylary vessel elements of Carthamus oxycantha under various intensities of lead (Pb) and nickel (Ni) stress (200, 400, 600, and 800 mg Pb(NO3)2, NiCl2·6H2O per kg of soil) was conducted. The deformations caused by metal stress were detected using a point-based image registration technique. Initially, a set of corresponding feature points, called landmarks, was selected for warping the two-dimensional microscopic images of the deformed/source (stressed) vessel to its normal/target (unstressed) counterpart. The results show that the target registration error is less than 3 mm on real plant image datasets. The stress caused alterations mainly in the diameter, size, and shape of the cells. Average cell diameter and average wall diameter of the vessels were measured with ImageJ. As the stress factor increased through 200, 400, 600, and 800 mg Pb(NO3)2, NiCl2·6H2O per kg of soil, the average cell diameter decreased from 18.566 to 13.1 μm and the average wall diameter increased from 5.166 to 10.1 μm. We noted large deformation in the form of shrinkage in cell size and diminution in diameter. The diminution in diameter and shrinkage in cell size of the vessel cells may be due to the deposition of wall materials; this may be a strategy to limit water flow, counteracting the rapid mobility and transport of excess metals and safeguarding the cellular components from the unpleasant consequences of metallic stress.
Affiliation(s)
- Tahir Iqbal
- Department of Botany, University of Science and Technology Bannu, KP, Pakistan
- Said K Shah
- Department of Computer Sciences, University of Science and Technology Bannu, KP, Pakistan
- Faizan Ullah
- Department of Botany, University of Science and Technology Bannu, KP, Pakistan
- Sultan Mehmood
- Department of Botany, University of Science and Technology Bannu, KP, Pakistan
- Muhammad A Zeb
- Department of Botany, University of Science and Technology Bannu, KP, Pakistan
|
276
|
Abdullah Al W, Yun ID. Partial Policy-Based Reinforcement Learning for Anatomical Landmark Localization in 3D Medical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1245-1255. [PMID: 31603816 DOI: 10.1109/tmi.2019.2946345] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Utilizing the idea of long-term cumulative return, reinforcement learning (RL) has shown remarkable performance in various fields. We follow the formulation of landmark localization in 3D medical images as an RL problem. Whereas value-based methods have been widely used to solve RL-based localization problems, we adopt an actor-critic based direct policy search method framed in a temporal difference learning approach. In RL problems with large state and/or action spaces, learning the optimal behavior is challenging and requires many trials. To improve the learning, we introduce partial policy-based reinforcement learning, which enables solving the large localization problem by learning the optimal policy on smaller partial domains. Independent actors efficiently learn the corresponding partial policies, each utilizing its own independent critic. The proposed policy reconstruction from the partial policies ensures robust and efficient localization, where the sub-agents contribute uniformly to the state transitions based on their simple partial policies mapping to binary actions. Experiments with three different localization problems in 3D CT and MR images showed that the proposed reinforcement learning requires a significantly smaller number of trials to learn the optimal behavior compared to the original behavior-learning scheme in RL. It also ensures satisfactory performance when trained on fewer images.
|
277
|
Cui Z, Mahmoodi S, Guy M, Lewis E, Havelock T, Bennett M, Conway J. A general framework in single and multi-modality registration for lung imaging analysis using statistical prior shapes. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105232. [PMID: 31809995 DOI: 10.1016/j.cmpb.2019.105232] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/31/2018] [Revised: 07/04/2019] [Accepted: 11/17/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The fusion of multi-slice computed tomography (MSCT) and single photon emission computed tomography (SPECT) represents a powerful tool for chronic obstructive pulmonary disease (COPD) analysis. In this paper, a novel, high-performance MSCT/SPECT non-rigid registration algorithm is proposed to accurately map lung lobe information onto the functional imaging. Such a fusion can then be used to guide lung volume reduction surgery. METHODS The proposed multi-modality fusion method uses a multi-channel technique that registers the MSCT scan to the ventilation and perfusion SPECT scans simultaneously. Furthermore, a novel objective function with fewer parameters is proposed to avoid adjusting the weighting parameter and to achieve better performance than existing methods in the literature. RESULTS A lung imaging dataset from a hospital and a synthetic, software-generated dataset are employed to validate the single- and multi-modality registration results. Our method improves registration accuracy and stability by up to 23% and 54%, respectively, and the proposed multi-channel technique also achieves improved registration accuracy compared with the single-channel method. CONCLUSIONS The fusion of lung lobes onto SPECT imaging is achievable by accurate MSCT/SPECT alignment. It can also be used to perform lobar lung activity analysis for COPD diagnosis and treatment.
Affiliation(s)
- Zheng Cui
- School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, United Kingdom.
- Sasan Mahmoodi
- School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, United Kingdom
- Matthew Guy
- Department of Imaging Physics, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
- Emma Lewis
- Scientific Computing Section, Royal Surrey County Hospital NHS Foundation Trust, Guildford GU2 7XX, United Kingdom
- Tom Havelock
- Southampton NIHR Respiratory Biomedical Research Unit, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
- Michael Bennett
- Southampton NIHR Respiratory Biomedical Research Unit, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
- Joy Conway
- Southampton NIHR Respiratory Biomedical Research Unit, University Hospital Southampton NHS Foundation Trust, Southampton SO16 6YD, United Kingdom
|
278
|
Fu Y, Lei Y, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Med Phys 2020; 47:1763-1774. [PMID: 32017141 PMCID: PMC7165051 DOI: 10.1002/mp.14065] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Revised: 01/09/2020] [Accepted: 01/27/2020] [Indexed: 12/11/2022] Open
Abstract
PURPOSE To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to predict the deformation vector field (DVF) in a few quick forward predictions. We have developed an unsupervised deep learning method for 4D-CT lung DIR with excellent registration accuracy, robustness, and computational speed. METHODS A fast and accurate 4D-CT lung DIR method, named LungRegNet, was proposed using deep learning. LungRegNet consists of two subnetworks, CoarseNet and FineNet. As the names suggest, CoarseNet predicts large lung motion on a coarse-scale image, while FineNet predicts local lung motion on a fine-scale image. Both CoarseNet and FineNet include a generator and a discriminator. The generator was trained to directly predict the DVF used to deform the moving image; the discriminator was trained to distinguish the deformed images from the original images. CoarseNet was first trained to deform the moving images, and the deformed images were then used to train FineNet. To increase the registration accuracy of LungRegNet, we generated vessel-enhanced images by computing pulmonary vasculature probability maps prior to the network prediction. RESULTS We performed fivefold cross-validation on ten 4D-CT datasets from our department. To compare with other methods, we also tested our method on 10 separate DIRLAB datasets, which provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggest that LungRegNet achieves better registration accuracy in terms of TRE than other deep learning-based methods reported on the DIRLAB datasets. Compared to conventional DIR methods, LungRegNet generated comparable registration accuracy with TRE smaller than 2 mm.
The integration of both the discriminator and the pulmonary vessel enhancement into the network was crucial to obtaining high registration accuracy for 4D-CT lung DIR. The mean and standard deviation of TRE were 1.00 ± 0.53 mm and 1.59 ± 1.58 mm on our datasets and the DIRLAB datasets, respectively. CONCLUSIONS An unsupervised deep learning-based method has been developed to rapidly and accurately register 4D-CT lung images. LungRegNet outperformed its deep learning-based peers and achieved excellent registration accuracy in terms of TRE.
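The TRE metric used with the DIRLAB landmark pairs is simply the mean Euclidean distance between propagated and reference landmarks, converted to millimeters via the voxel spacing. A minimal sketch; the voxel-coordinate convention and the default spacing are assumptions:

```python
import math

def tre_mm(moved_pts, fixed_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean 3D distance between corresponding landmark lists.
    Points are (i, j, k) voxel coordinates; spacing converts to mm."""
    dists = [math.sqrt(sum(((a - b) * s) ** 2
                           for a, b, s in zip(p, q, spacing)))
             for p, q in zip(moved_pts, fixed_pts)]
    return sum(dists) / len(dists)
```

With 300 landmark pairs per DIRLAB case, this mean (and its standard deviation over landmarks) is the figure of merit reported above.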
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
|
279
|
Takagi H, Kadoya N, Kajikawa T, Tanaka S, Takayama Y, Chiba T, Ito K, Dobashi S, Takeda K, Jingu K. Multi-atlas-based auto-segmentation for prostatic urethra using novel prediction of deformable image registration accuracy. Med Phys 2020; 47:3023-3031. [PMID: 32201958 DOI: 10.1002/mp.14154] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 02/04/2020] [Accepted: 03/14/2020] [Indexed: 01/13/2023] Open
Abstract
PURPOSE Accurate identification of the prostatic urethra and bladder can help determine dosing and evaluate urinary toxicity during intensity-modulated radiation therapy (IMRT) planning in patients with localized prostate cancer. However, it is challenging to locate the prostatic urethra in planning computed tomography (pCT). In the present study, we developed a multiatlas-based auto-segmentation method for prostatic urethra identification using deformable image registration accuracy prediction with machine learning (ML) and assessed its feasibility. METHODS We examined 120 patients with prostate cancer treated with IMRT. All patients underwent temporary urinary catheter placement for identification and contouring of the prostatic urethra in pCT images (ground truth). Our method comprises the following three steps: (a) select four atlas datasets from the atlas datasets using the deformable image registration (DIR) accuracy prediction model, (b) deform them by structure-based DIR, and (c) propagate the urethra contour using the displacement vector field calculated by the DIR. In (a), for identifying suitable datasets, we used the trained support vector machine regression (SVR) model and five feature descriptors (e.g., prostate volume) to increase DIR accuracy. This method was trained/validated using 100 patients, and performance was evaluated on an independent test set of 20 patients. Fivefold cross-validation was used to optimize the hyperparameters of the DIR accuracy prediction model. We assessed the accuracy of our method by comparing it with those of two others: Acosta's method-based patient selection (previous study method, by Acosta et al.) and Waterman's method (which defines the prostatic urethra based on the center of the prostate, by Waterman et al.). We used the centerline distance (CLD) between the ground truth and the predicted prostatic urethra as the evaluation index.
RESULTS The CLD in the entire prostatic urethra was 2.09 ± 0.89 mm (our proposed method), 2.77 ± 0.99 mm (Acosta et al., P = 0.022), and 3.47 ± 1.19 mm (Waterman et al., P < 0.001); our proposed method showed the highest accuracy. In segmented CLD, the CLD in the top 1/3 segment was greatly improved over that of Waterman et al. and slightly improved over that of Acosta et al., with results of 2.49 ± 1.78 mm (our proposed method), 2.95 ± 1.75 mm (Acosta et al., P = 0.42), and 5.76 ± 3.09 mm (Waterman et al., P < 0.001). CONCLUSIONS We developed a DIR accuracy prediction model-based multiatlas auto-segmentation method for prostatic urethra identification. Our method identified the prostatic urethra with a mean error of 2.09 mm, likely due to the combined effects of employing the SVR model in patient selection, the modified atlas dataset characteristics, and the DIR algorithm. Our method has potential utility in prostate cancer IMRT and can replace the use of temporary indwelling urinary catheters.
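The CLD evaluation index above compares two curves in 3D. One common reading of a centerline distance is the mean nearest-point distance from each ground-truth centerline point to the predicted centerline; the paper's exact slice-wise definition may differ. A sketch under that assumption:

```python
import numpy as np

def centerline_distance(gt_points, pred_points):
    """Mean nearest-point distance (one interpretation of CLD, in mm if
    the points are in mm) from the ground-truth centerline to the
    predicted centerline."""
    gt = np.asarray(gt_points, dtype=float)
    pred = np.asarray(pred_points, dtype=float)
    # Pairwise distance matrix: |gt| x |pred|
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Note this form is asymmetric; averaging it with the distance computed in the opposite direction gives a symmetric variant.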
Collapse
Affiliation(s)
- Hisamichi Takagi
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8575, Japan
| | - Noriyuki Kadoya
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| | - Tomohiro Kajikawa
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| | - Shohei Tanaka
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| | - Yoshiki Takayama
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| | - Takahito Chiba
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| | - Kengo Ito
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| | - Suguru Dobashi
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8575, Japan
| | - Ken Takeda
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8575, Japan
| | - Keiichi Jingu
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
| |
Collapse
|
280
|
Estienne T, Lerousseau M, Vakalopoulou M, Alvarez Andres E, Battistella E, Carré A, Chandra S, Christodoulidis S, Sahasrabudhe M, Sun R, Robert C, Talbot H, Paragios N, Deutsch E. Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation. Front Comput Neurosci 2020; 14:17. [PMID: 32265680 PMCID: PMC7100603 DOI: 10.3389/fncom.2020.00017] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 02/11/2020] [Indexed: 01/30/2023] Open
Abstract
Image registration and segmentation are the two most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results with other recent state-of-the-art methods. Moreover, our proposed framework achieves a significant improvement (p < 0.005) in registration performance inside the tumor regions, providing a generic method that does not need any predefined conditions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
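Relaxing the similarity constraints within tumor regions can be as simple as excluding masked voxels from the dissimilarity term, so pathological tissue does not penalize the alignment. A hedged sketch of that idea, using a sum-of-squared-differences surrogate rather than the authors' exact formulation:

```python
import numpy as np

def masked_mse(fixed, warped, tumor_mask):
    """Mean squared intensity difference computed only OUTSIDE the tumor
    mask; voxels inside the mask contribute nothing to the similarity."""
    valid = ~np.asarray(tumor_mask).astype(bool)
    return float(np.mean((fixed[valid] - warped[valid]) ** 2))
```

In a joint framework, the predicted segmentation itself would supply `tumor_mask`, coupling the two tasks during inference.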
Collapse
Affiliation(s)
- Théo Estienne
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
| | - Marvin Lerousseau
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
| | - Maria Vakalopoulou
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
| | - Emilie Alvarez Andres
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
| | - Enzo Battistella
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
| | - Alexandre Carré
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
| | - Siddhartha Chandra
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
| | - Stergios Christodoulidis
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Predictive Biomarkers and Novel Therapeutic Strategies in Oncology, Villejuif, France
| | - Mihir Sahasrabudhe
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
| | - Roger Sun
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
| | - Charlotte Robert
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
| | - Hugues Talbot
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
| | - Nikos Paragios
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
| | - Eric Deutsch
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
| |
Collapse
|
281
|
McKenzie EM, Santhanam A, Ruan D, O'Connor D, Cao M, Sheng K. Multimodality image registration in the head-and-neck using a deep learning-derived synthetic CT as a bridge. Med Phys 2020; 47:1094-1104. [PMID: 31853975 PMCID: PMC7067662 DOI: 10.1002/mp.13976] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 11/28/2019] [Accepted: 12/10/2019] [Indexed: 11/08/2022] Open
Abstract
PURPOSE To develop and demonstrate the efficacy of a novel head-and-neck multimodality image registration technique using deep-learning-based cross-modality synthesis. METHODS AND MATERIALS Twenty-five head-and-neck patients received magnetic resonance (MR) and computed tomography (CT) (CTaligned) scans on the same day with the same immobilization. Fivefold cross validation was used with all of the MR-CT pairs to train a neural network to generate synthetic CTs from MR images. Twenty-four of the 25 patients also had a separate CT without immobilization (CTnon-aligned), which was used for testing. The CTnon-aligned scans were deformed to the synthetic CT and compared to CTnon-aligned registered to MR. The same registrations were performed from MR to CTnon-aligned and from synthetic CT to CTnon-aligned. All registrations used B-splines for modeling the deformation and mutual information as the objective. Results were evaluated using the 95% Hausdorff distance between spinal cord contours, landmark error, inverse consistency, and the Jacobian determinant of the estimated deformation fields. RESULTS When large initial rigid misalignment is present, registering CT to the MRI-derived synthetic CT aligns the cord better than a direct registration. The average landmark error decreased from 9.8 ± 3.1 mm in MR→CTnon-aligned to 6.0 ± 2.1 mm in CTsynth→CTnon-aligned deformable registrations. In the CT-to-MR direction, the landmark error decreased from 10.0 ± 4.3 mm in CTnon-aligned→MR deformable registrations to 6.6 ± 2.0 mm in CTnon-aligned→CTsynth deformable registrations. The Jacobian determinant had an average value of 0.98. The proposed method also demonstrated improved inverse consistency over the direct method. CONCLUSIONS We showed that using a deep learning-derived synthetic CT in lieu of an MR for MR→CT and CT→MR deformable registration offers superior results to direct multimodal registration.
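The mutual-information objective used for the B-spline registrations above can be estimated from a joint intensity histogram. A minimal illustrative sketch; the bin count and this discrete estimator are assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information I(A;B) estimated from a joint intensity
    histogram of two images of identical shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of B
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Replacing the real CT-MR pair with a CT-syntheticCT pair, as the paper does, turns a hard multimodal objective into an easier mono-modal one while keeping the same estimator.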
Collapse
Affiliation(s)
- Elizabeth M McKenzie
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90024, USA
| | - Anand Santhanam
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90024, USA
| | - Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90024, USA
| | - Daniel O'Connor
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90024, USA
| | - Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90024, USA
| | - Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, 90024, USA
| |
Collapse
|
282
|
Velázquez-Durán MJ, Campos-Delgado DU, Arce-Santana ER, Mejía-Rodríguez AR. Multimodal 3D rigid image registration based on expectation maximization. HEALTH AND TECHNOLOGY 2020. [DOI: 10.1007/s12553-019-00353-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
283
|
Medication-Related Osteonecrosis of the Jaw—Comparison of Bone Imaging Using Ultrashort Echo-Time Magnetic Resonance Imaging and Cone-Beam Computed Tomography. Invest Radiol 2020; 55:160-167. [DOI: 10.1097/rli.0000000000000617] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
284
|
Probabilistic Learning Coherent Point Drift for 3D Ultrasound Fetal Head Registration. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:4271519. [PMID: 32089729 PMCID: PMC7013355 DOI: 10.1155/2020/4271519] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 12/04/2019] [Indexed: 11/18/2022]
Abstract
Quantification of brain growth is crucial for the assessment of fetal well-being, for which ultrasound (US) images are the chosen clinical modality. However, they present artefacts, such as acoustic occlusion, especially after the 18th gestational week, when cranial calcification appears. Fetal US volume registration is useful in one or all of the following cases: to monitor the evolution of fetometry indicators, to segment different structures using a fetal brain atlas, and to align and combine multiple fetal brain acquisitions. This paper presents a new approach for automatic registration of real 3D US fetal brain volumes, volumes that contain a considerable degree of occlusion artefacts, noise, and missing data. To achieve this, a novel variant of the coherent point drift method is proposed. This work employs supervised learning to automatically segment and construct a point cloud and to estimate the subsequent weight factors. These factors are obtained by a random forest-based classification and are used to assign appropriate nonuniform membership probability values in a Gaussian mixture model. These characteristics allow for the automatic registration of 3D US fetal brain volumes with occlusions and multiplicative noise, without needing an initial point cloud. Compared to other intensity- and geometry-based algorithms, the proposed method achieves an error reduction of 7.4% to 60.7%, with a target registration error of only 6.38 ± 3.24 mm. This makes the herein proposed approach highly suitable for 3D automatic registration of fetal head US volumes, which can be useful to monitor fetal growth, segment several brain structures, or even compound multiple acquisitions taken from different projections.
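The key modification described above, assigning nonuniform membership probabilities to the Gaussian mixture components, appears in the E-step of a coherent-point-drift-style algorithm. A simplified sketch of that step; the outlier weight, the per-point weights, and the exact outlier constant are illustrative assumptions:

```python
import numpy as np

def cpd_responsibilities(X, Y, weights, sigma2, w_outlier=0.1):
    """E-step of a CPD-like model: each GMM centroid (moving point in Y)
    carries a nonuniform membership probability `weights` (e.g., derived
    from a random-forest classifier) instead of the uniform 1/M."""
    N, D = X.shape
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize memberships
    # Squared distances between fixed points X (N) and centroids Y (M)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    g = weights[None, :] * np.exp(-d2 / (2 * sigma2))
    # Uniform outlier component absorbs unexplained points
    c = w_outlier / (1 - w_outlier) * (2 * np.pi * sigma2) ** (D / 2) / N
    return g / (g.sum(axis=1, keepdims=True) + c)  # N x M responsibilities
```

Downweighting centroids in occluded or noisy regions lowers their responsibilities, which is how the method tolerates missing data.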
Collapse
|
285
|
Pan L, Shi F, Xiang D, Yu K, Duan L, Zheng J, Chen X. OCTRexpert: A Feature-based 3D Registration Method for Retinal OCT Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:3885-3897. [PMID: 31995490 DOI: 10.1109/tip.2020.2967589] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Medical image registration can be used for studying longitudinal and cross-sectional data, quantitatively monitoring disease progression, and guiding computer-assisted diagnosis and treatment. However, deformable registration, which enables more precise and quantitative comparison, has not been well developed for retinal optical coherence tomography (OCT) images. This paper proposes a new 3D registration approach for retinal OCT data called OCTRexpert. To the best of our knowledge, the proposed algorithm is the first full 3D registration approach for retinal OCT images that can be applied to longitudinal OCT images of both normal and seriously pathological subjects. In this approach, a pre-processing step is first performed to remove eye motion artifacts, and then a novel design-detection-deformation strategy is applied for the registration. In the design step, a set of features is designed for each voxel in the image. In the detection step, active voxels are selected and point-to-point correspondences between the subject and template images are established. In the deformation step, the image is hierarchically deformed according to the detected correspondences in a multi-resolution fashion. The proposed method is evaluated on a dataset with longitudinal OCT images from 20 healthy subjects and 4 subjects diagnosed with serious choroidal neovascularization (CNV). Experimental results show that the proposed registration algorithm consistently yields statistically significant improvements in both the Dice similarity coefficient and the average unsigned surface error compared with the other registration methods.
Collapse
|
286
|
Zachiu C, de Senneville BD, Raaymakers BW, Ries M. Biomechanical quality assurance criteria for deformable image registration algorithms used in radiotherapy guidance. ACTA ACUST UNITED AC 2020; 65:015006. [DOI: 10.1088/1361-6560/ab501d] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
287
|
Špiclin Ž, McClelland J, Kybic J, Goksel O. Learning-Based Affine Registration of Histological Images. BIOMEDICAL IMAGE REGISTRATION 2020. [PMCID: PMC7279928 DOI: 10.1007/978-3-030-50120-4_2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
The use of different stains for histological sample preparation reveals distinct tissue properties and may result in a more accurate diagnosis. However, as a result of the staining process, the tissue slides are deformed, and registration is required before further processing. The importance of this problem led to the organization of an open challenge named the Automatic Non-rigid Histological Image Registration Challenge (ANHIR), organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided several hundred image pairs and a server-side evaluation platform. One of the most difficult sub-problems for the challenge participants was to find an initial, global transform before attempting to calculate the final, non-rigid deformation field. This article solves the problem by proposing a deep network trained in an unsupervised way with good generalization. We propose a method that works well for images with different resolutions and aspect ratios, without the need for image padding, while maintaining a low number of network parameters and a fast forward-pass time. The proposed method is orders of magnitude faster than the classical approach based on iterative similarity metric optimization or computer vision descriptors. The success rate is above 98% for both the training set and the evaluation set. We make both the training and inference code freely available.
Collapse
Affiliation(s)
- Žiga Špiclin
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| | - Jamie McClelland
- Centre for Medical Image Computing, University College London, London, UK
| | - Jan Kybic
- Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
| | - Orcun Goksel
- Computer Vision Lab, ETH Zurich, Zurich, Switzerland
| |
Collapse
|
288
|
Moccia S, Romeo L, Migliorelli L, Frontoni E, Zingaretti P. Supervised CNN Strategies for Optical Image Segmentation and Classification in Interventional Medicine. INTELLIGENT SYSTEMS REFERENCE LIBRARY 2020. [DOI: 10.1007/978-3-030-42750-4_8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
289
|
Multi-channel Image Registration of Cardiac MR Using Supervised Feature Learning with Convolutional Encoder-Decoder Network. BIOMEDICAL IMAGE REGISTRATION 2020. [PMCID: PMC7279923 DOI: 10.1007/978-3-030-50120-4_10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
It is difficult to register images involving large deformations and intensity inhomogeneity. In this paper, a new multi-channel registration algorithm using modified multi-feature mutual information (α-MI) based on a minimal spanning tree (MST) is presented. First, instead of relying on handcrafted features, a convolutional encoder-decoder network is employed to learn a latent feature representation from cardiac MR images. Second, forward computation and backward propagation are performed in a supervised fashion to make the learned features more discriminative. Finally, local features containing appearance information are extracted and integrated into α-MI to achieve multi-channel registration. The proposed method has been evaluated on cardiac cine-MRI data from 100 patients. The experimental results show that features learned from a deep network are more effective than handcrafted features in guiding intra-subject registration of cardiac MR images.
Collapse
|
290
|
Davatzikos C, Sotiras A, Fan Y, Habes M, Erus G, Rathore S, Bakas S, Chitalia R, Gastounioti A, Kontos D. Precision diagnostics based on machine learning-derived imaging signatures. Magn Reson Imaging 2019; 64:49-61. [PMID: 31071473 PMCID: PMC6832825 DOI: 10.1016/j.mri.2019.04.012] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Revised: 04/24/2019] [Accepted: 04/29/2019] [Indexed: 01/08/2023]
Abstract
The complexity of modern multi-parametric MRI has increasingly challenged conventional interpretations of such images. Machine learning has emerged as a powerful approach to integrating diverse and complex imaging data into signatures of diagnostic and predictive value. It has also allowed us to progress from group comparisons to imaging biomarkers that offer value on an individual basis. We review several directions of research around this topic, emphasizing the use of machine learning in personalized predictions of clinical outcome, in breaking down broad umbrella diagnostic categories into more detailed and precise subtypes, and in non-invasively estimating cancer molecular characteristics. These methods and studies contribute to the field of precision medicine, by introducing more specific diagnostic and predictive biomarkers of clinical outcome, therefore pointing to better matching of treatments to patients.
Collapse
Affiliation(s)
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America.
| | - Aristeidis Sotiras
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Yong Fan
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Mohamad Habes
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Guray Erus
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Saima Rathore
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Rhea Chitalia
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Aimilia Gastounioti
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| | - Despina Kontos
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
| |
Collapse
|
291
|
Tustison NJ, Avants BB, Gee JC. Learning image-based spatial transformations via convolutional neural networks: A review. Magn Reson Imaging 2019; 64:142-153. [DOI: 10.1016/j.mri.2019.05.037] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Revised: 05/22/2019] [Accepted: 05/26/2019] [Indexed: 12/18/2022]
|
292
|
Fan J, Cao X, Wang Q, Yap PT, Shen D. Adversarial learning for mono- or multi-modal registration. Med Image Anal 2019; 58:101545. [PMID: 31557633 PMCID: PMC7455790 DOI: 10.1016/j.media.2019.101545] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2019] [Revised: 06/16/2019] [Accepted: 08/19/2019] [Indexed: 11/29/2022]
Abstract
This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration methods, our approach can train a deformable registration network without the need for ground-truth deformations or specific similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with feedback from the discrimination network, which is designed to judge whether a pair of registered images is sufficiently similar. Using adversarial training, the registration network is trained to predict deformations that are accurate enough to fool the discrimination network. The proposed method is thus a general registration framework, which can be applied to both mono-modal and multi-modal image registration. Experiments on four brain MRI datasets and a multi-modal pelvic image dataset indicate that our method yields promising registration performance in accuracy, efficiency, and generalizability compared with state-of-the-art registration methods, including those based on deep learning.
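The deformable transformation layer that connects the registration and discrimination networks is, at its core, a differentiable resampler that warps the moving image by the predicted DVF. A 2D NumPy sketch of that resampling step using bilinear interpolation; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def warp_image(image, dvf):
    """Bilinearly resample a 2D `image` at positions displaced by `dvf`
    (shape H x W x 2, per-pixel displacements in pixels), i.e. the
    operation a deformable transformation layer performs."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + dvf[..., 0], 0, H - 1)      # sampling coordinates,
    x = np.clip(xs + dvf[..., 1], 0, W - 1)      # clamped at the border
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0                      # fractional offsets
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because every step is differentiable in `dvf`, gradients from the discriminator's feedback can flow back through the warp into the registration network.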
Collapse
Affiliation(s)
- Jingfan Fan
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Xiaohuan Cao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
| |
Collapse
|
293
|
Kubicek J, Tomanec F, Cerny M, Vilimek D, Kalova M, Oczka D. Recent Trends, Technical Concepts and Components of Computer-Assisted Orthopedic Surgery Systems: A Comprehensive Review. SENSORS (BASEL, SWITZERLAND) 2019; 19:E5199. [PMID: 31783631 PMCID: PMC6929084 DOI: 10.3390/s19235199] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 11/08/2019] [Accepted: 11/12/2019] [Indexed: 12/17/2022]
Abstract
Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases, employing modern clinical navigation systems and surgical tools. This paper provides a comprehensive review of recent trends and possibilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound images), systems that utilize either 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review is focused on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in CAOS systems. We also outline the possibilities for using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
Collapse
Affiliation(s)
- Jan Kubicek
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, FEECS, 708 00 Ostrava-Poruba, Czech Republic; (F.T.); (M.C.); (D.V.); (M.K.); (D.O.)
| | | | | | | | | | | |
Collapse
|
294
|
Xing Q, Chitnis P, Sikdar S, Alshiek J, Shobeiri SA, Wei Q. M3VR-A multi-stage, multi-resolution, and multi-volumes-of-interest volume registration method applied to 3D endovaginal ultrasound. PLoS One 2019; 14:e0224583. [PMID: 31751356 PMCID: PMC6872108 DOI: 10.1371/journal.pone.0224583] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Accepted: 10/16/2019] [Indexed: 11/24/2022] Open
Abstract
Heterogeneity of echo-texture and lack of sharply delineated tissue boundaries in diagnostic ultrasound images make three-dimensional (3D) registration challenging, especially when the volumes to be registered are considerably different due to local changes. We implemented M3VR, a novel multi-stage, multi-resolution, and multi-volumes-of-interest volume registration method that optimally registers volumetric ultrasound image data containing significant and local anatomical differences. A single-region registration is optimized first for a close initial alignment, to avoid convergence to a locally optimal solution. Multiple sub-volumes of interest can then be selected as target alignment regions to achieve consistent alignment across the volume. Finally, a multi-resolution rigid registration is performed on these sub-volumes, each associated with a different weight in the cost function. We applied the method to 3D endovaginal ultrasound image data acquired from patients during a biopsy procedure of the pelvic floor muscle. Systematic assessment through cross-validation demonstrated its accuracy and robustness. The algorithm can also be applied to medical imaging data of other modalities for which traditional rigid registration methods would fail.
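The abstract does not spell out the weighted multi-VOI cost; a minimal numpy sketch of one plausible form is given below. The function name, the slice-based VOI selection, and the normalization by the weight sum are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_multi_voi_cost(fixed, moving, voi_slices, weights):
    """Weighted sum of per-VOI mean-squared intensity differences.

    voi_slices: list of tuples of slices, each selecting one sub-volume
                of interest in both images.
    weights:    relative importance of each VOI in the overall cost.
    """
    total = 0.0
    for sl, w in zip(voi_slices, weights):
        diff = fixed[sl].astype(float) - moving[sl].astype(float)
        total += w * np.mean(diff ** 2)
    return total / sum(weights)
```

A rigid-registration optimizer would minimize this cost over translation and rotation parameters applied to the moving volume, with the multi-resolution stage evaluating it on progressively finer image pyramids.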
Collapse
Affiliation(s)
- Qi Xing
- Department of Computer Science, George Mason University, Fairfax, Virginia, United States of America
- The School of Information Science and Technology, Southwest Jiaotong University, Sichuan, China
| | - Parag Chitnis
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
| | - Siddhartha Sikdar
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
| | - Jonia Alshiek
- Department of Obstetrics & Gynecology, INOVA Health System, Falls Church, Virginia, United States of America
| | - S. Abbas Shobeiri
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Department of Obstetrics & Gynecology, INOVA Health System, Falls Church, Virginia, United States of America
| | - Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
| |
Collapse
|
295
|
Lingala SG, Guo Y, Bliesener Y, Zhu Y, Lebel RM, Law M, Nayak KS. Tracer kinetic models as temporal constraints during brain tumor DCE-MRI reconstruction. Med Phys 2019; 47:37-51. [PMID: 31663134 PMCID: PMC6980286 DOI: 10.1002/mp.13885] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Revised: 10/17/2019] [Accepted: 10/17/2019] [Indexed: 12/11/2022] Open
Abstract
Purpose: To apply tracer kinetic models as temporal constraints during reconstruction of under-sampled brain tumor dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI). Methods: A library of concentration vs. time profiles is simulated for a range of physiological kinetic parameters. The library is reduced to a dictionary of temporal bases, where each profile is approximated by a sparse linear combination of the bases. Image reconstruction is formulated as estimation of concentration profiles and sparse model coefficients with a fixed sparsity level. Simulations are performed to evaluate modeling error, and error statistics in kinetic parameter estimation in the presence of noise. Retrospective under-sampling experiments are performed on a brain tumor DCE digital reference object (DRO) and 12 in-vivo 3T brain tumor datasets. The performance of the proposed under-sampled reconstruction scheme and of an existing compressed sensing-based temporal finite-difference (tFD) under-sampled reconstruction was compared against the fully sampled inverse Fourier transform-based reconstruction. Results: Simulations demonstrate that sparsity levels of 2 and 3 model the library profiles from the Patlak and extended Tofts-Kety (ETK) models, respectively. Noise sensitivity analysis showed equivalent kinetic parameter estimation error statistics from noisy concentration profiles and model-approximated profiles. DRO-based experiments showed good fidelity in recovery of kinetic maps from 20-fold under-sampled data. In-vivo experiments demonstrated reduced bias and uncertainty in kinetic mapping with the proposed approach compared to tFD at under-sampling reduction factors ≥20. Conclusions: Tracer kinetic models can be applied as temporal constraints during brain tumor DCE-MRI reconstruction. The proposed under-sampled scheme resulted in model parameter estimates less biased with respect to conventional fully sampled DCE-MRI reconstructions and parameter estimation. The approach is flexible, can use nonlinear kinetic models, and does not require tuning of regularization parameters.
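The library-to-dictionary step can be illustrated with the Patlak model, which is linear in its two kinetic parameters, so a two-element temporal basis captures the whole library. The toy arterial input function and parameter grids below are illustrative, and an SVD basis stands in for the paper's sparse dictionary.

```python
import numpy as np

t = np.linspace(0, 5, 50)                 # minutes
cp = t * np.exp(-t)                       # toy arterial input function Cp(t)
cp_int = np.cumsum(cp) * (t[1] - t[0])    # running integral of Cp

# Library of Patlak curves C(t) = Ktrans * int(Cp) + vp * Cp
ktrans = np.linspace(0.01, 0.2, 20)
vp = np.linspace(0.01, 0.1, 10)
library = np.array([k * cp_int + v * cp for k in ktrans for v in vp])

# Reduce the library to temporal bases via SVD; keep the top k.
_, _, Vt = np.linalg.svd(library, full_matrices=False)
k = 2
bases = Vt[:k]                            # (k, T) orthonormal temporal bases

# Approximate one concentration profile by its projection onto the bases.
profile = 0.12 * cp_int + 0.05 * cp
coeffs = bases @ profile
approx = coeffs @ bases
rel_err = np.linalg.norm(profile - approx) / np.linalg.norm(profile)
```

Because every Patlak curve lies in the two-dimensional span of Cp and its integral, the sparsity level of 2 reported in the abstract is exactly what this toy reconstruction exhibits; the ETK model adds one more degree of freedom, hence sparsity 3.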
Collapse
Affiliation(s)
- Sajan Goud Lingala
- Roy J Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
| | - Yi Guo
- Snap Inc., San Francisco, CA, USA
| | - Yannick Bliesener
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| | | | - R Marc Lebel
- GE Healthcare Applied Sciences Laboratory, Calgary, Canada
| | - Meng Law
- Department of Neuroscience, Monash University, Melbourne, Australia
| | - Krishna S Nayak
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| |
Collapse
|
296
|
Blendowski M, Bouteldja N, Heinrich MP. Multimodal 3D medical image registration guided by shape encoder-decoder networks. Int J Comput Assist Radiol Surg 2019; 15:269-276. [PMID: 31741286 DOI: 10.1007/s11548-019-02089-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Accepted: 11/04/2019] [Indexed: 01/03/2023]
Abstract
PURPOSE Nonlinear multimodal image registration, for example, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI), fundamentally depends on a definition of image similarity. Previous methods that derived modality-invariant representations focused on either global statistical grayscale relations or local structural similarity, both of which are prone to local optima. In contrast to most learning-based methods that rely on strong supervision of aligned multimodal image pairs, we aim to overcome this limitation for further practical use cases. METHODS We propose a new concept that exploits anatomical shape information and requires only segmentation labels for both modalities individually. First, a shape-constrained encoder-decoder segmentation network without skip connections is jointly trained on labeled CT and MRI inputs. Second, an iterative energy-based minimization scheme is introduced that relies on the capability of the network to generate intermediate nonlinear shape representations. This further eases the multimodal alignment in the case of large deformations. RESULTS Our novel approach robustly and accurately aligns 3D scans from the multimodal whole-heart segmentation dataset, outperforming classical unsupervised frameworks. Since both parts of our method rely on (stochastic) gradient optimization, it can be easily integrated in deep learning frameworks and executed on GPUs. CONCLUSIONS We present an integrated approach for weakly supervised multimodal image registration. Achieving promising results due to the exploration of intermediate shape features as registration guidance encourages further research in this direction.
Collapse
Affiliation(s)
- Max Blendowski
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
| | - Nassim Bouteldja
- Institute of Imaging and Computer Vision, RWTH Aachen University, Templergraben 55, 52056, Aachen, Germany
| | - Mattias P Heinrich
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
| |
Collapse
|
297
|
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells Iii W, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. [PMID: 31446127 PMCID: PMC6819249 DOI: 10.1016/j.neuroimage.2019.116094] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 07/18/2019] [Accepted: 08/09/2019] [Indexed: 11/16/2022] Open
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (iUS) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for image registration and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at 3 institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, the algorithm is able to reduce landmark errors prior to registration in three data sets (5.37±4.27, 4.18±1.97 and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37 and 2.24±0.78 mm, respectively). This algorithm was tested against 15 other algorithms and it is competitive with the state-of-the-art on multiple datasets. We show that the algorithm has one of the lowest errors in all datasets (accuracy), and this is achieved while sticking to a fixed set of parameters for multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that stick to fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). Landmark errors were further characterized according to brain regions and tumor types, a topic so far missing in the literature.
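The switch from difference-based to correlation-based attribute matching can be sketched in a few lines: correlation is invariant to an affine change of attribute magnitude, which sum-of-squared-differences is not. The vector size and the gain/offset values below are illustrative, not taken from the paper.

```python
import numpy as np

def ssd(a, b):
    """Difference-based matching: sum of squared differences."""
    return float(np.sum((a - b) ** 2))

def correlation(a, b):
    """Correlation-based matching: Pearson correlation of attribute vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
attr = rng.standard_normal(64)      # texture-attribute vector at one voxel
rescaled = 2.0 * attr + 1.0         # same pattern under a gain/offset change

# SSD penalizes the intensity rescaling; correlation still scores a perfect match.
```

This robustness to modality-dependent intensity scaling is one plausible reason correlation-based matching generalizes better across multi-site MR and iUS data.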
Collapse
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
| | - Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
| | - Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Jie Luo
- Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
| | - Pedro Teodoro
- Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
| | - Herculano Carvalho
- Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
| | - Jorge Martins
- Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
| | - Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
| | - Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - William Wells Iii
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
| | - Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
298
|
Sun L, Shao W, Wang M, Zhang D, Liu M. High-order Feature Learning for Multi-atlas based Label Fusion: Application to Brain Segmentation with MRI. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2702-2713. [PMID: 31725379 DOI: 10.1109/tip.2019.2952079] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-atlas based segmentation methods have shown their effectiveness in brain regions-of-interest (ROI) segmentation, by propagating labels from multiple atlases to a target image based on the similarity between patches in the target image and multiple atlas images. Most of the existing multi-atlas based methods use image intensity features to calculate the similarity between a pair of image patches for label fusion. In particular, using only low-level image intensity features cannot adequately characterize the complex appearance patterns (e.g., the high-order relationship between voxels within a patch) of brain magnetic resonance (MR) images. To address this issue, this paper develops a high-order feature learning framework for multi-atlas based label fusion, where high-order features of image patches are extracted and fused for segmenting ROIs of structural brain MR images. Specifically, an unsupervised feature learning method (i.e., the means-covariances restricted Boltzmann machine, mcRBM) is employed to learn high-order features (i.e., mean and covariance features) of patches in brain MR images. Then, a group-fused sparsity dictionary learning method is proposed to jointly calculate the voting weights for label fusion, based on the learned high-order features and the original image intensity features. The proposed method is compared with several state-of-the-art label fusion methods on the ADNI, NIREP and LONI-LPBA40 datasets. The Dice ratios achieved by our method are 88.30% and 88.83% on the left and right hippocampus of the ADNI dataset, and 79.54% and 81.02% on the NIREP and LONI-LPBA40 datasets, respectively, while the best Dice ratios yielded by the other methods are 86.51%, 87.39%, 78.48% and 79.65%, respectively.
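The voting-weight idea behind patch-based label fusion can be sketched as follows. A Gaussian kernel on patch intensity differences is a common baseline weighting, used here as a stand-in for the paper's group-fused sparsity dictionary learning; function and parameter names are illustrative.

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Patch-based label fusion: weight each atlas label by patch similarity.

    Weights decay with the squared intensity difference between the target
    patch and each atlas patch, then labels are chosen by weighted vote.
    """
    d2 = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    w = np.exp(-d2 / (2 * sigma ** 2))
    votes = {}
    for label, wi in zip(atlas_labels, w):
        votes[label] = votes.get(label, 0.0) + wi
    return max(votes, key=votes.get)
```

The paper's contribution is to compute these similarities from learned mean-and-covariance (high-order) features rather than raw intensities, which this sketch would accommodate by passing feature vectors instead of intensity patches.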
Collapse
|
299
|
An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation. J Digit Imaging 2019; 31:738-747. [PMID: 29488179 DOI: 10.1007/s10278-018-0062-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Image segmentation, classifying a digital image into different segments, is one of the most common steps in digital image processing. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors with different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reason to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: the architecture and the learning algorithms, used respectively to design the network model and to optimize parameters for the network training phase. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels yields the effect of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84, respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases segmentation accuracy compared to previous techniques.
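The parameter saving from stacking small kernels can be checked directly: two 3 × 3 convolutional layers cover the same 5 × 5 receptive field as one 5 × 5 layer but with fewer weights. The channel count below is illustrative, not taken from the paper.

```python
def conv_params(kernel, c_in, c_out, bias=True):
    """Number of weights (+ optional biases) in one 2D convolution layer."""
    return kernel * kernel * c_in * c_out + (c_out if bias else 0)

c = 64                                          # illustrative channel count
one_5x5 = conv_params(5, c, c, bias=False)      # 25 * 64 * 64 = 102400
two_3x3 = 2 * conv_params(3, c, c, bias=False)  # 2 * 9 * 64 * 64 = 73728
# Same 5x5 receptive field, ~28% fewer parameters with stacked 3x3 kernels,
# plus an extra nonlinearity between the two layers.
```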
Collapse
|
300
|
Reducing non-realistic deformations in registration using precise and reliable landmark correspondences. Comput Biol Med 2019; 115:103515. [PMID: 31698233 DOI: 10.1016/j.compbiomed.2019.103515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 10/15/2019] [Accepted: 10/16/2019] [Indexed: 11/22/2022]
Abstract
Non-rigid image registration is prone to non-realistic deformations. In this paper, we propose a novel landmark-correspondence detection algorithm with which the non-realistic deformations in image registration can be reduced. Our method consists of the following steps. First, landmarks in the reference image are extracted by a corner detector. Then the landmarks are transferred to the template image by the proposed Multiscale Local Rigid Matching (MsLRM) algorithm. Because the interpolating splines in free-form deformation (FFD) are highly sensitive to outliers, a two-stage outlier-removal method is applied before the landmark correspondences are incorporated into an FFD-based registration through a penalty term. The proposed method was validated on both simulated images and real-world clinical lung dynamic contrast-enhanced magnetic resonance images. The results showed that the proposed MsLRM achieved sub-pixel accuracy and was robust to local contrast changes. On clinical datasets, the MsLRM-based landmark-constrained registration improved registration accuracy by at least 25% compared with state-of-the-art registration methods. It achieved an average expert landmark distance of 0.23 mm, close to the inter-observer variability of 0.17 mm. We conclude that our novel landmark-constrained registration improves registration performance on dynamic medical images and outperforms the state-of-the-art registration methods.
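The landmark penalty term can be sketched as an additive term in the registration cost, steering the deformation toward the detected correspondences. The function names and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def landmark_penalty(ref_pts, warped_pts):
    """Mean Euclidean distance between corresponding landmark pairs."""
    return float(np.mean(np.linalg.norm(ref_pts - warped_pts, axis=1)))

def registration_cost(similarity, ref_pts, warped_pts, lam=0.1):
    """Image dissimilarity plus a landmark-correspondence penalty, as in
    landmark-constrained FFD registration; lam balances the two terms."""
    return similarity + lam * landmark_penalty(ref_pts, warped_pts)
```

With the two-stage outlier removal applied first, only reliable correspondences enter this penalty, which is what keeps the spline-based deformation from being dragged into non-realistic shapes by mismatched points.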
Collapse
|