301
Schipaanboord B, Boukerroui D, Peressutti D, van Soest J, Lustberg T, Dekker A, Elmpt WV, Gooding MJ. An Evaluation of Atlas Selection Methods for Atlas-Based Automatic Segmentation in Radiotherapy Treatment Planning. IEEE Transactions on Medical Imaging 2019; 38:2654-2664. [PMID: 30969918] [DOI: 10.1109/tmi.2019.2907072]
Abstract
Atlas-based automatic segmentation is used in radiotherapy planning to accelerate the delineation of organs at risk (OARs). Atlas selection has been proposed as a way to improve the accuracy and execution time of segmentation, on the assumption that the more similar the atlas is to the patient, the better the results will be. This paper presents an analysis of atlas selection methods in the context of radiotherapy treatment planning. For a range of commonly contoured OARs, a thorough comparison of a large class of typical atlas selection methods was performed. For this evaluation, clinically contoured CT images of the head and neck (N=316) and thorax (N=280) were used. State-of-the-art intensity- and deformation-similarity-based atlas selection methods were found to compare poorly to perfect atlas selection. Counter-intuitively, atlas selection methods based on a fixed set of representative atlases outperformed methods based on the patient image. This study suggests that atlas-based segmentation with currently available selection methods falls well short of the potential best performance, hampering its clinical utility. Effective atlas selection remains an open challenge in atlas-based segmentation for radiotherapy planning.
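The intensity-similarity ranking evaluated in this paper can be illustrated with a minimal sketch (our own toy code, not the authors' pipeline; `ncc` and `rank_atlases` are hypothetical names): candidate atlases are scored against the patient image with normalized cross-correlation and sorted by similarity.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def rank_atlases(patient, atlases):
    """Indices of atlases sorted from most to least similar to the patient."""
    scores = [ncc(patient, atlas) for atlas in atlases]
    return sorted(range(len(atlases)), key=lambda i: scores[i], reverse=True)
```

The paper's point is precisely that such patient-driven rankings correlate only weakly with the segmentation quality an atlas would actually deliver.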
302
Yang F, Ding M, Zhang X. Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor. Sensors 2019; 19:4675. [PMID: 31661828] [PMCID: PMC6864520] [DOI: 10.3390/s19214675]
Abstract
Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging because of the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural-representation-based registration method is proposed to address these problems. First, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed to give effective structural representations of 3D medical images, transforming multi-modal registration into a mono-modal problem. The sum of absolute differences between structural representations is used as the similarity measure. Subsequently, a foveated-MIND-based spatial constraint is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict evaluation of the energy function to the image region undergoing non-rigid deformation. Finally, accurate and efficient 3D registration is achieved by minimizing the similarity-measure-based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1-, T2-, and proton density (PD)-weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method has higher computational efficiency and registration accuracy, in terms of target registration error (TRE), than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, MIND, and the self-similarity context (SSC) descriptor, except that it yields a slightly larger TRE than HLCSO for CT-PET registration. Experiments on real MR and ultrasound images with unknown deformation have also been performed to demonstrate the practicality and superiority of the proposed method.
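A much-simplified 2D version of the MIND idea can be sketched as follows (an illustrative reduction, not the paper's foveated 3D descriptor): each pixel is described by exponentially weighted squared differences to its 4-neighbourhood, so images with the same structure but different intensity mappings yield nearly identical descriptors, and the SAD between descriptors serves as the mono-modal similarity measure.

```python
import numpy as np

def mind_descriptor(img):
    """Simplified 2D MIND: per-pixel squared differences to the 4-neighbourhood,
    normalised into an exponential self-similarity descriptor."""
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    d = np.stack([(img - np.roll(img, s, axis=(0, 1))) ** 2 for s in shifts])
    v = d.mean(axis=0) + 1e-8            # local variance estimate
    desc = np.exp(-d / v)
    return desc / desc.sum(axis=0, keepdims=True)

def mind_sad(a, b):
    """Sum of absolute differences between descriptors: the similarity measure
    once both modalities are mapped into structural space."""
    return float(np.abs(mind_descriptor(a) - mind_descriptor(b)).sum())
```

Inverting the intensities of an image leaves its descriptor unchanged, which is exactly the modality independence the SAD measure relies on.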
Affiliation(s)
- Feng Yang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
303
Mang A, Gholami A, Davatzikos C, Biros G. CLAIRE: A Distributed-Memory Solver for Constrained Large Deformation Diffeomorphic Image Registration. SIAM Journal on Scientific Computing 2019; 41:C548-C584. [PMID: 34650324] [PMCID: PMC8513530] [DOI: 10.1137/18m1207818]
Abstract
With this work we release CLAIRE, a distributed-memory implementation of an effective solver for constrained large deformation diffeomorphic image registration problems in three dimensions. We consider an optimal control formulation and invert for a stationary velocity field that parameterizes the deformation map. Our solver is based on a globalized, preconditioned, inexact reduced-space Gauss-Newton-Krylov scheme. We exploit state-of-the-art techniques in scientific computing to develop an effective solver that scales to thousands of distributed-memory nodes on high-end clusters. We present the formulation, discuss algorithmic features, describe the software package, and introduce an improved preconditioner for the reduced-space Hessian to speed up the convergence of our solver. We test registration performance on synthetic and real data and demonstrate registration accuracy on several neuroimaging datasets. We compare the performance of our scheme against different flavors of the Demons algorithm for diffeomorphic image registration. We study the convergence of our preconditioner and of the overall algorithm, and report scalability results on state-of-the-art supercomputing platforms. We demonstrate that we can solve registration problems for clinically relevant data sizes in two to four minutes on a standard compute node with 20 cores, attaining excellent data fidelity. With the present work we achieve a speedup of (on average) 5×, with a peak performance of up to 17×, compared with our former work.
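The stationary-velocity parameterization that CLAIRE inverts for is commonly integrated into a deformation map by scaling and squaring; a toy 2D numpy sketch of that integration step (ours, with nearest-neighbour resampling for brevity, not CLAIRE's solver) is:

```python
import numpy as np

def exp_velocity(v, steps=6):
    """Turn a stationary velocity field v (H x W x 2) into a displacement
    field by scaling and squaring: start from v / 2**steps, then compose
    the small map with itself `steps` times."""
    h, w, _ = v.shape
    phi = v / (2.0 ** steps)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(steps):
        # phi <- phi + phi(x + phi(x)); nearest-neighbour lookup for brevity
        yq = np.clip(np.round(ys + phi[..., 0]), 0, h - 1).astype(int)
        xq = np.clip(np.round(xs + phi[..., 1]), 0, w - 1).astype(int)
        phi = phi + phi[yq, xq]
    return phi
```

For a spatially constant velocity the exponential is a pure translation by that velocity, which gives a quick sanity check of the composition loop.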
Affiliation(s)
- Andreas Mang
- Department of Mathematics, University of Houston, Houston, TX 77204-5008
- Amir Gholami
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720-1770
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104-2643
- George Biros
- Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX 78712-1229
304
Qiu C, Peng W, Wang Y, Hong J, Xia S. Fusion of mis-registered GFP and phase contrast images with convolutional sparse representation and adaptive region energy rule. Microsc Res Tech 2019; 83:35-47. [PMID: 31612603] [DOI: 10.1002/jemt.23385]
Abstract
Biomedical image fusion is the process of combining information from different imaging modalities into a single synthetic image. Fusion of phase contrast and green fluorescent protein (GFP) images is important for predicting the role of unknown proteins, analyzing protein function, locating subcellular structures, and so forth. Fusion performance generally depends strongly on the registration of the GFP and phase contrast images, yet accurate registration of multi-modal images is a very challenging task. Hence, we propose a novel fusion method based on convolutional sparse representation (CSR) to fuse mis-registered GFP and phase contrast images. First, the GFP and phase contrast images are decomposed by CSR to obtain the coefficients of base layers and detail layers. Second, the coefficients of the detail layers are fused by the sum-modified-Laplacian (SML) rule, while the coefficients of the base layers are fused by the proposed adaptive region energy (ARE) rule, which is computed with a discussion-mechanism-based brain storm optimization (DMBSO) algorithm. Finally, the fused image is obtained by applying the inverse CSR. The proposed fusion method is tested on 100 pairs of mis-registered GFP and phase contrast images. The experimental results show that it achieves better fusion results and greater robustness than several existing fusion methods.
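The base/detail split with a per-layer fusion rule can be illustrated with a toy two-scale sketch (our own illustration: a box filter stands in for the CSR decomposition, averaging for the ARE base rule, and a simple max-absolute rule for the SML detail rule):

```python
import numpy as np

def box_blur(img, r=2):
    """Simple box filter used to split an image into base + detail layers."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy: r + dy + img.shape[0],
                       r + dx: r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def fuse(a, b):
    """Two-scale fusion: average the base layers; at each pixel keep the
    detail-layer value with the larger magnitude (max-rule stand-in for
    the paper's SML / ARE rules)."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return (base_a + base_b) / 2 + detail
```

A bright feature present in only one input survives the fusion almost intact, which is the behaviour the detail rule is designed to guarantee.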
Affiliation(s)
- Chenhui Qiu
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China
- Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Wenxian Peng
- College of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Yuanyuan Wang
- School of Information & Electrical Engineering, Zhejiang University City College, Hangzhou, China
- Jiang Hong
- Department of Radiology, Hangzhou Hospital Zhejiang Armed Police Corps, Hangzhou, China
- Shunren Xia
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China
- Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- School of Information & Electrical Engineering, Zhejiang University City College, Hangzhou, China
305
Miller K, Joldes GR, Bourantas G, Warfield S, Hyde DE, Kikinis R, Wittek A. Biomechanical modeling and computer simulation of the brain during neurosurgery. International Journal for Numerical Methods in Biomedical Engineering 2019; 35:e3250. [PMID: 31400252] [PMCID: PMC6785376] [DOI: 10.1002/cnm.3250]
Abstract
Computational biomechanics of the brain for neurosurgery is an emerging area of research recently gaining in importance and practical applications. This review paper presents the contributions of the Intelligent Systems for Medicine Laboratory and its collaborators to this field, discussing the modeling approaches adopted and the methods developed for obtaining the numerical solutions. We adopt a physics-based modeling approach and describe the brain deformation in mechanical terms (such as displacements, strains, and stresses), which can be computed using a biomechanical model, by solving a continuum mechanics problem. We present our modeling approaches related to geometry creation, boundary conditions, loading, and material properties. From the point of view of solution methods, we advocate the use of fully nonlinear modeling approaches, capable of capturing very large deformations and nonlinear material behavior. We discuss finite element and meshless domain discretization, the use of the total Lagrangian formulation of continuum mechanics, and explicit time integration for solving both time-accurate and steady-state problems. We present the methods developed for handling contacts and for warping 3D medical images using the results of our simulations. We present two examples to showcase these methods: brain shift estimation for image registration and brain deformation computation for neuronavigation in epilepsy treatment.
Affiliation(s)
- K. Miller
- Intelligent Systems for Medicine Laboratory, Department of Mechanical Engineering, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- G. R. Joldes
- Intelligent Systems for Medicine Laboratory, Department of Mechanical Engineering, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- G. Bourantas
- Intelligent Systems for Medicine Laboratory, Department of Mechanical Engineering, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- S. K. Warfield
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, 300 Longwood Avenue, Boston, MA 02115
- D. E. Hyde
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, 300 Longwood Avenue, Boston, MA 02115
- R. Kikinis
- Surgical Planning Laboratory, Brigham and Women's Hospital and Harvard Medical School, 45 Francis St, Boston, MA 02115
- Medical Image Computing, University of Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
- A. Wittek
- Intelligent Systems for Medicine Laboratory, Department of Mechanical Engineering, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
306
Fu Y, Wu X, Thomas AM, Li HH, Yang D. Automatic large quantity landmark pairs detection in 4DCT lung images. Med Phys 2019; 46:4490-4501. [PMID: 31318989] [PMCID: PMC8311742] [DOI: 10.1002/mp.13726]
Abstract
PURPOSE To automatically and precisely detect a large quantity of landmark pairs between two lung computed tomography (CT) images to support evaluation of deformable image registration (DIR). We expect that the generated landmark pairs will significantly augment the current lung CT benchmark datasets in both quantity and positional accuracy. METHODS A large number of landmark pairs were detected within the lung between the end-exhalation (EE) and end-inhalation (EI) phases of the lung four-dimensional computed tomography (4DCT) datasets. Thousands of landmarks were detected by applying the Harris-Stephens corner detection algorithm on the probability maps of the lung vasculature tree. A parametric image registration method (pTVreg) was used to establish initial landmark correspondence by registering the images at EE and EI phases. A multi-stream pseudo-siamese (MSPS) network was then developed to further improve the landmark pair positional accuracy by directly predicting three-dimensional (3D) shifts to optimally align the landmarks in EE to their counterparts in EI. Positional accuracies of the detected landmark pairs were evaluated using both digital phantoms and publicly available landmark pairs. RESULTS Dense sets of landmark pairs were detected for 10 4DCT lung datasets, with an average of 1886 landmark pairs per case. The mean and standard deviation of target registration error (TRE) were 0.47 ± 0.45 mm with 98% of landmark pairs having a TRE smaller than 2 mm for 10 digital phantom cases. Tests using 300 manually labeled landmark pairs in 10 lung 4DCT benchmark datasets (DIRLAB) produced TRE results of 0.73 ± 0.53 mm with 97% of landmark pairs having a TRE smaller than 2 mm. CONCLUSION A new method was developed to automatically and precisely detect a large quantity of landmark pairs between lung CT image pairs. The detected landmark pairs could be used as benchmark datasets for more accurate and informative quantitative evaluation of DIR algorithms.
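The Harris-Stephens corner response used in the detection step can be sketched in 2D as follows (our simplified illustration on a plain image, not the paper's application to vesselness probability maps):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris-Stephens corner response R = det(M) - k * trace(M)^2, where M
    is the structure tensor smoothed with a 3x3 box filter."""
    iy, ix = np.gradient(img.astype(float))   # row gradient first, then column

    def smooth(m):
        pad = np.pad(m, 1, mode='edge')
        return sum(pad[1 + dy:1 + dy + m.shape[0], 1 + dx:1 + dx + m.shape[1]]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    sxx, syy, sxy = smooth(ix * ix), smooth(iy * iy), smooth(ix * iy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
```

Flat regions score zero and true corners score positive, so thresholding the response yields candidate landmark locations.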
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
- Xue Wu
- Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
- Allan M. Thomas
- Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
- Harold H. Li
- Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
- Deshan Yang
- Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
307
Dalca AV, Yu E, Golland P, Fischl B, Sabuncu MR, Iglesias JE. Unsupervised Deep Learning for Bayesian Brain MRI Segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2019; 11766:356-365. [PMID: 32432231] [PMCID: PMC7235150] [DOI: 10.1007/978-3-030-32248-9_40]
Abstract
Probabilistic atlas priors have been commonly used to derive adaptive and robust brain MRI segmentation algorithms. Widely-used neuroimage analysis pipelines rely heavily on these techniques, which are often computationally expensive. In contrast, there has been a recent surge of approaches that leverage deep learning to implement segmentation tools that are computationally efficient at test time. However, most of these strategies rely on learning from manually annotated images. These supervised deep learning methods are therefore sensitive to the intensity profiles in the training dataset. To develop a deep learning-based segmentation model for a new image dataset (e.g., of different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal ad hoc adaptation or augmentation approaches. In this paper, we propose an alternative strategy that combines a conventional probabilistic atlas-based segmentation with deep learning, enabling one to train a segmentation model for new MRI scans without the need for any manually segmented images. Our experiments include thousands of brain MRI scans and demonstrate that the proposed method achieves good accuracy for a brain MRI segmentation task for different MRI contrasts, requiring only approximately 15 seconds at test time on a GPU.
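The combination of a probabilistic atlas prior with an intensity likelihood can be illustrated by a per-voxel MAP labeling sketch (a toy stand-in for the paper's model, which additionally infers the deformation and likelihood parameters; `map_segment` is our hypothetical name):

```python
import numpy as np

def map_segment(img, prior, means, var=1.0):
    """Per-voxel MAP label: argmax_k prior_k(x) * N(img(x); mu_k, var).
    `prior` has one channel per class, as in a probabilistic atlas."""
    means = np.asarray(means, dtype=float)
    lik = np.exp(-(img[..., None] - means) ** 2 / (2.0 * var))
    return (prior * lik).argmax(axis=-1)
```

The atlas prior is what makes the labeling contrast-adaptive: voxels whose intensity is ambiguous between classes are resolved by the spatial prior rather than by the intensity model.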
Affiliation(s)
- Adrian V Dalca
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
- Evan Yu
- Meinig School of Biomedical Engineering, Cornell University
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Mert R Sabuncu
- Meinig School of Biomedical Engineering, Cornell University
- School of Electrical and Computer Engineering, Cornell University
- Juan Eugenio Iglesias
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
- Centre for Medical Image Computing (CMIC), University College London
308
Parande S, Esmaili Torshabi A. A Study on Robustness of Various Deformable Image Registration Algorithms on Image Reconstruction Using 4DCT Thoracic Images. J Biomed Phys Eng 2019; 9:559-568. [PMID: 31750270] [PMCID: PMC6820026] [DOI: 10.31661/jbpe.v0i0.377]
Abstract
Background: Medical image interpolation has recently been introduced as a helpful tool for extracting additional information from the images initially acquired by tomography systems. Deformable image registration algorithms are the main means of performing such interpolation. Materials and Methods: In this work, 4DCT thoracic images of five real patients provided by the DIR-Lab group were used. Four registration algorithms, 1) original Horn-Schunck, 2) inverse consistent Horn-Schunck, 3) original Demons, and 4) fast Demons, were implemented with the DIRART software package. The calculated vector fields were then used to reconstruct 4DCT images at any desired time with an optical-flow-based interpolation method. As a comparative study, the accuracy of the interpolated image obtained by each strategy was measured as the mean squared error between the interpolated image and the real middle image serving as ground truth. Results: The results demonstrate that image interpolation between a given pair of images is feasible. Among the algorithms, inverse consistent Horn-Schunck reconstructed the interpolated image with the highest accuracy, while the Demons method performed worst. Conclusion: Since image interpolation degrades as the distance between the two available images increases, the accuracy of the four registration algorithms was also investigated with respect to this issue. As a result, inverse consistent Horn-Schunck is not necessarily the best performer, especially in the presence of the large displacements caused by greater distances.
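The DVF-scaling idea behind optical-flow-based interpolation can be sketched in 1D (our toy illustration, not the DIRART implementation): scale the displacement field by the normalized time t, warp, and score with mean squared error against the ground-truth middle image.

```python
import numpy as np

def warp_1d(signal, dvf):
    """Backward warp: out[i] = signal[i + dvf[i]] (nearest neighbour)."""
    idx = np.round(np.arange(signal.size) + dvf).astype(int)
    return signal[np.clip(idx, 0, signal.size - 1)]

def middle_phase(img_ee, dvf, t=0.5):
    """Reconstruct the phase at normalised time t by scaling the
    end-exhalation -> end-inhalation displacement field."""
    return warp_1d(img_ee, t * dvf)

def mse(a, b):
    """Mean squared error used to score a reconstructed phase."""
    return float(np.mean((a - b) ** 2))
```

With a perfect displacement field the reconstructed middle phase matches the ground truth exactly; registration error shows up directly as a nonzero MSE.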
Affiliation(s)
- Parande S
- MSc, Faculty of Sciences and Modern Technologies, Graduate University of Advanced Technology, Haftbagh St., Kerman, Iran
- Esmaili Torshabi A
- PhD, Faculty of Sciences and Modern Technologies, Graduate University of Advanced Technology, Haftbagh St., Kerman, Iran
309
Agier R, Valette S, Kéchichian R, Fanton L, Prost R. Hubless keypoint-based 3D deformable groupwise registration. Med Image Anal 2019; 59:101564. [PMID: 31590032] [DOI: 10.1016/j.media.2019.101564]
Abstract
We present a novel algorithm for Fast Registration Of image Groups (FROG), applied to large 3D image groups. Our approach extracts 3D SURF keypoints from the images, computes matched pairs of keypoints, and registers the group by minimizing pair distances in a hubless way, i.e., without computing any central mean image. Using keypoints significantly reduces the problem complexity compared with voxel-based approaches and enables an in-core global optimization, similar to bundle adjustment for 3D reconstruction. As we aim to register images of different patients, the matching step yields many outliers, so we propose a new EM-weighting algorithm that efficiently discards them. Global optimization is carried out with a fast gradient descent algorithm, which allows our approach to robustly register large datasets. The result is a set of diffeomorphic half transforms that link the volumes together and can subsequently be exploited for computational anatomy and landmark detection. We show experimental results on whole-body CT scans, with groups of up to 103 volumes. On a benchmark based on anatomical landmarks, our algorithm compares favorably with the star-groupwise voxel-based ANTs and NiftyReg approaches while being much faster. We also discuss the limitations of our approach for lower resolution images such as brain MRI.
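The EM-style outlier handling can be sketched as an iterative reweighting loop (our simplified stand-in for the paper's EM-weighting, not its actual implementation): pairs whose residuals are large relative to the current scale estimate receive vanishing weights.

```python
import numpy as np

def em_weights(residuals, iters=10):
    """Iteratively down-weight outlier keypoint pairs: re-estimate the
    residual scale from the weighted data, then assign each pair a
    Gaussian weight under that scale."""
    w = np.ones_like(residuals, dtype=float)
    for _ in range(iters):
        sigma2 = np.sum(w * residuals ** 2) / np.sum(w) + 1e-12
        w = np.exp(-residuals ** 2 / (2 * sigma2))
    return w
```

After a few iterations the scale estimate is dominated by the inliers, so a single gross mismatch contributes essentially nothing to the global optimization.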
Affiliation(s)
- R Agier
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- S Valette
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- R Kéchichian
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- L Fanton
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- Hospices Civils de Lyon, GHC, Hôpital Edouard-Herriot, Service de médecine légale, Lyon 69003, France
- R Prost
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
310
Garcia Guevara J, Peterlik I, Berger MO, Cotin S. Elastic Registration Based on Compliance Analysis and Biomechanical Graph Matching. Ann Biomed Eng 2019; 48:447-462. [DOI: 10.1007/s10439-019-02364-4]
311
Kartasalo K, Latonen L, Vihinen J, Visakorpi T, Nykter M, Ruusuvuori P. Comparative analysis of tissue reconstruction algorithms for 3D histology. Bioinformatics 2019; 34:3013-3021. [PMID: 29684099] [PMCID: PMC6129300] [DOI: 10.1093/bioinformatics/bty210]
Abstract
Motivation: Digital pathology enables new approaches that expand beyond storage, visualization, or analysis of histological samples in digital format. One novel opportunity is 3D histology, in which a three-dimensional reconstruction of the sample is formed computationally from serial tissue sections. This allows examining tissue architecture in 3D, for example for diagnostic purposes. Importantly, 3D histology enables joint mapping of cellular morphology with spatially resolved omics data in the true 3D context of the tissue at microscopic resolution. Several algorithms have been proposed for the reconstruction task, but a quantitative comparison of their accuracy has been lacking. Results: We developed a benchmarking framework to evaluate the accuracy of several free and commercial 3D reconstruction methods using two whole slide image datasets. The results provide a solid basis for further development and application of 3D histology algorithms and indicate that methods capable of compensating for local tissue deformation are superior to simpler approaches. Availability and implementation: Code: https://github.com/BioimageInformaticsTampere/RegBenchmark. Whole slide image datasets: http://urn.fi/urn:nbn:fi:csc-kata20170705131652639702. Supplementary information: Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Kimmo Kartasalo
- Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland
- Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- BioMediTech Institute, Tampere, Finland
- Leena Latonen
- Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland
- BioMediTech Institute, Tampere, Finland
- Fimlab Laboratories, Tampere University Hospital, Tampere, Finland
- Jorma Vihinen
- Faculty of Engineering Sciences, Tampere University of Technology, Tampere, Finland
- Tapio Visakorpi
- Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland
- BioMediTech Institute, Tampere, Finland
- Fimlab Laboratories, Tampere University Hospital, Tampere, Finland
- Matti Nykter
- Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland
- Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- BioMediTech Institute, Tampere, Finland
- Pekka Ruusuvuori
- Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland
- BioMediTech Institute, Tampere, Finland
- Faculty of Computing and Electrical Engineering, Tampere University of Technology, Tampere 33101, Finland
312
Memory-efficient 2.5D convolutional transformer networks for multi-modal deformable registration with weak label supervision applied to whole-heart CT and MRI scans. Int J Comput Assist Radiol Surg 2019; 14:1901-1912. [DOI: 10.1007/s11548-019-02068-z]
313
Sahyoun CC, Subhash HM, Peru D, Ellwood RP, Pierce MC. An Experimental Review of Optical Coherence Tomography Systems for Noninvasive Assessment of Hard Dental Tissues. Caries Res 2019; 54:43-54. [PMID: 31533102] [DOI: 10.1159/000502375]
Abstract
Optical coherence tomography (OCT) is a noninvasive, high-resolution, cross-sectional imaging technique. To date, OCT has been demonstrated in several areas of dentistry, primarily using wavelengths around 1,300 nm, low numerical aperture (NA) imaging lenses, and detectors insensitive to the polarization of light. The objective of this study is to compare the performance of three commercially available OCT systems operating with alternative wavelengths, imaging lenses, and detectors for OCT imaging of dental enamel. Spectral-domain (SD) OCT systems with (i) 840 nm (Lumedica, OQ LabScope 1.0) and (ii) 1,300 nm (Thorlabs, Tel320) center wavelengths, and (iii) a swept-source (SS) OCT system (Thorlabs OCS1300SS) centered at 1,325 nm with optional polarization-sensitive detection, were used. Low NA (0.04) and high NA (0.15) imaging lenses were used with system (iii). Healthy in vivo and in vitro human enamel and eroded in vitro bovine enamel specimens were imaged. The Tel320 system achieved greater imaging depth than the OQ LabScope 1.0, on average imaging 2.6 times deeper into the tooth (n = 10). The low NA lens provided a larger field of view and depth of focus, while the high NA lens provided higher lateral resolution and greater contrast. Polarization-sensitive imaging eliminated birefringent banding artifacts that can appear in conventional OCT scans. In summary, this study illustrates the performance of three commercially available OCT systems, objective lenses, and imaging modes and how these can affect imaging depth, resolution, field of view, and contrast in enamel. Users investigating OCT for dental applications should consider these factors when selecting an OCT system for clinical or basic science studies.
Affiliation(s)
- Christine C Sahyoun
- Department of Biomedical Engineering, Rutgers, the State University of New Jersey, Piscataway, New Jersey, USA
- Hrebesh M Subhash
- Global Development Center, Colgate-Palmolive Company, Piscataway, New Jersey, USA
- Deborah Peru
- Global Development Center, Colgate-Palmolive Company, Piscataway, New Jersey, USA
- Roger P Ellwood
- Global Development Center, Colgate-Palmolive Company, Piscataway, New Jersey, USA
- Mark C Pierce
- Department of Biomedical Engineering, Rutgers, the State University of New Jersey, Piscataway, New Jersey, USA
314
|
Ai D, Liu D, Wang Y, Fu T, Huang Y, Jiang Y, Song H, Wang Y, Liang P, Yang J. Nonrigid registration for tracking incompressible soft tissues with sliding motion. Med Phys 2019; 46:4923-4939. [DOI: 10.1002/mp.13694] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 05/22/2019] [Accepted: 06/14/2019] [Indexed: 12/15/2022] Open
Affiliation(s)
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Dingkun Liu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yifan Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yurong Jiang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song: School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; AICFVE of Beijing Film Academy, Beijing 100088, China
- Ping Liang: Department of Interventional Ultrasonics, General Hospital of Chinese PLA, Beijing 100853, China
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
|
315
|
Kim S, Ra JB. Dynamic focal plane estimation for dental panoramic radiography. Med Phys 2019; 46:4907-4917. [PMID: 31520417 DOI: 10.1002/mp.13823] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Revised: 09/02/2019] [Accepted: 09/03/2019] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Digital panoramic radiography is widely used in dental clinics and provides anatomical information on the intraoral structure along a predefined arc-shaped path. However, since the intraoral structure varies from patient to patient, it is nearly impossible to design a single static focal path or plane that fits the dentition of all patients. In response, we introduce an imaging algorithm for digital panoramic radiography that can provide a focused panoramic radiographic image for every patient by automatically estimating the best focal plane for each one. METHODS The aim of this study is to improve the image quality of dental panoramic radiography using a three-dimensional (3D) dynamic focal plane, newly introduced to represent the arbitrary 3D intraoral structure of each patient. The proposed algorithm consists of three steps: preprocessing, focal plane estimation, and image reconstruction. We first perform preprocessing to improve the accuracy of focal plane estimation. The 3D dynamic focal plane is then estimated by adjusting the position of the image plane so that object boundaries in neighboring projection data are aligned, or focused, on the plane. Finally, a panoramic radiographic image is reconstructed using the estimated dynamic focal plane. RESULTS The proposed algorithm is evaluated on a numerical phantom dataset and four clinical human datasets. To examine the image quality improvement owing to the proposed algorithm, we generate panoramic radiographic images based on a conventional static focal plane and on the estimated 3D dynamic focal planes, respectively. Experimental results show that image quality is markedly improved for all datasets when using the 3D dynamic focal planes estimated by the proposed algorithm.
CONCLUSIONS We propose an imaging algorithm for digital panoramic radiography that provides improved image quality by estimating dynamic focal planes fitted to each individual patient's intraoral structure.
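The focal-plane estimation step above chooses, among candidate planes, the one on which object boundaries appear best focused. As a toy 1-D illustration of that idea (not the authors' algorithm; `sharpness` and `select_focal_plane` are hypothetical names, and variance of the discrete Laplacian is one common focus measure):

```python
import numpy as np

def sharpness(signal):
    """Focus measure: variance of the discrete Laplacian (second difference).
    Well-focused reconstructions have sharper edges, hence higher variance."""
    lap = np.diff(np.asarray(signal, float), n=2)
    return float(np.var(lap))

def select_focal_plane(candidates):
    """Return the index of the candidate reconstruction (one per trial focal
    plane) with the highest focus measure."""
    scores = [sharpness(c) for c in candidates]
    return int(np.argmax(scores))
```

A blurred copy of an edge profile scores lower than the sharp original, so the sharp candidate is selected.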
Affiliation(s)
- Seungeon Kim: School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
- Jong Beom Ra: School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
|
316
|
Krebs J, Delingette H, Mailhe B, Ayache N, Mansi T. Learning a Probabilistic Model for Diffeomorphic Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2165-2176. [PMID: 30716033 DOI: 10.1109/tmi.2019.2897112] [Citation(s) in RCA: 76] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We propose to learn a low-dimensional probabilistic deformation model from data that can be used for the registration and analysis of deformations. The latent variable model maps similar deformations close to each other in an encoding space. It makes it possible to compare deformations, to generate normal or pathological deformations for any new image, or to transport deformations from one image pair to any other image. Our unsupervised method is based on variational inference. In particular, we use a conditional variational autoencoder network and constrain transformations to be symmetric and diffeomorphic by applying a differentiable exponentiation layer with a symmetric loss function. We also present a formulation that includes spatial regularization such as diffusion-based filters. In addition, our framework provides multi-scale velocity field estimations. We evaluated our method on 3-D intra-subject registration using 334 cardiac cine-MRIs. On this dataset, our method showed state-of-the-art performance, with a mean Dice score of 81.2% and a mean Hausdorff distance of 7.3 mm using 32 latent dimensions, compared with three state-of-the-art methods, while also producing more regular deformation fields. The average time per registration was 0.32 s. Finally, we visualized the learned latent space and showed that the encoded deformations can be used to transport deformations and to cluster diseases, with a classification accuracy of 83% after applying a linear projection.
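The exponentiation layer mentioned above is commonly implemented by scaling and squaring of a stationary velocity field: start from a near-identity displacement and self-compose it repeatedly. A minimal 1-D numpy sketch of that standard construction (an illustration under simplifying assumptions, not the authors' network code; `np.interp` clamps at the grid boundary, a crude boundary condition):

```python
import numpy as np

def exp_velocity_field_1d(v, n_steps=6):
    """Approximate the exponential of a stationary 1-D velocity field by
    scaling and squaring: scale v down by 2**n_steps so the transform is
    near-identity, then square (self-compose) n_steps times."""
    v = np.asarray(v, float)
    x = np.arange(v.size, dtype=float)      # voxel grid
    phi = x + v / (2.0 ** n_steps)          # near-identity transform
    for _ in range(n_steps):
        # self-composition phi <- phi o phi via linear interpolation
        phi = np.interp(phi, x, phi)
    return phi
```

For a smooth, modest velocity field the result is monotone (invertible in 1-D), which is the diffeomorphic property the layer is meant to enforce.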
|
317
|
Jahani N, Cohen E, Hsieh MK, Weinstein SP, Pantalone L, Hylton N, Newitt D, Davatzikos C, Kontos D. Prediction of Treatment Response to Neoadjuvant Chemotherapy for Breast Cancer via Early Changes in Tumor Heterogeneity Captured by DCE-MRI Registration. Sci Rep 2019; 9:12114. [PMID: 31431633 PMCID: PMC6702160 DOI: 10.1038/s41598-019-48465-x] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Accepted: 08/05/2019] [Indexed: 12/11/2022] Open
Abstract
We analyzed DCE-MR images from 132 women with locally advanced breast cancer from the I-SPY1 trial to evaluate changes of intra-tumor heterogeneity for augmenting early prediction of pathologic complete response (pCR) and recurrence-free survival (RFS) after neoadjuvant chemotherapy (NAC). Utilizing image registration, voxel-wise changes including tumor deformations and changes in DCE-MRI kinetic features were computed to characterize heterogeneous changes within the tumor. Using five-fold cross-validation, logistic regression and Cox regression were performed to model pCR and RFS, respectively. The extracted imaging features were evaluated in augmenting established predictors, including functional tumor volume (FTV) and histopathologic and demographic factors, using the area under the curve (AUC) and the C-statistic as performance measures. The extracted voxel-wise features were also compared to analogous conventional aggregated features to evaluate the potential advantage of voxel-wise analysis. Voxel-wise features improved prediction of pCR (AUC = 0.78 ± 0.03 vs 0.71 ± 0.04, p < 0.05) and RFS (C-statistic = 0.76 ± 0.05 vs 0.63 ± 0.01, p < 0.05), while models based on analogous aggregate imaging features did not show appreciable performance changes (p > 0.05). Furthermore, all selected voxel-wise features demonstrated significant association with outcome (p < 0.05). Thus, precise measures of voxel-wise changes in tumor heterogeneity extracted from registered DCE-MRI scans can improve early prediction of neoadjuvant treatment outcomes in locally advanced breast cancer.
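The AUC used above as a performance measure equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (the Mann-Whitney formulation). A small self-contained implementation of that identity, unrelated to the study's actual pipeline:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation:
    the fraction of positive/negative pairs in which the positive case is
    scored higher (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` counts 3 winning pairs out of 4.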
Affiliation(s)
- Nariman Jahani: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Eric Cohen: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Meng-Kang Hsieh: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Susan P Weinstein: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Lauren Pantalone: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Nola Hylton: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, 94115, USA
- David Newitt: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, 94115, USA
- Christos Davatzikos: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
|
318
|
Ren T, Wang H, Feng H, Xu C, Liu G, Ding P. Study on the improved fuzzy clustering algorithm and its application in brain image segmentation. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105503] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
319
|
Robust 3D image reconstruction of pancreatic cancer tumors from histopathological images with different stains and its quantitative performance evaluation. Int J Comput Assist Radiol Surg 2019; 14:2047-2055. [PMID: 31267332 PMCID: PMC6858398 DOI: 10.1007/s11548-019-02019-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Accepted: 06/24/2019] [Indexed: 11/30/2022]
Abstract
Purpose Histopathological imaging is widely used for the analysis and diagnosis of multiple diseases. Several methods have been proposed for the 3D reconstruction of pathological images, captured from thin sections of a given specimen, which get nonlinearly deformed due to the preparation process. The majority of the available methods for registering such images use the degree of matching of adjacent images as the criteria for registration, which can result in unnatural deformations of the anatomical structures. Moreover, most methods assume that the same staining is used for all images, when in fact multiple staining is usually applied in order to enhance different structures in the images. Methods This paper proposes a non-rigid 3D reconstruction method based on the assumption that internal structures on the original tissue must be smooth and continuous. Landmarks are detected along anatomical structures using template matching based on normalized cross-correlation (NCC), forming jagged shape trajectories that traverse several slices. The registration process smooths out these trajectories and deforms the images accordingly. Artifacts are automatically handled by using the confidence of the NCC in order to reject unreliable landmarks. Results The proposed method was applied to a large series of histological sections from the pancreas of a KPC mouse. Some portions were dyed primarily with HE stain, while others were dyed alternately with HE, CK19, MT and Ki67 stains. A new evaluation method is proposed to quantitatively evaluate the smoothness and isotropy of the obtained reconstructions, both for single and multiple staining. Conclusions The experimental results show that the proposed method produces smooth and nearly isotropic 3D reconstructions of pathological images with either single or multiple stains. From these reconstructions, microanatomical structures enhanced by different stains can be simultaneously observed.
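The landmark detection step described above relies on normalized cross-correlation (NCC) template matching, with low-confidence matches rejected as artifacts. A minimal 1-D sketch of that mechanism (illustrative; the function names and the `min_ncc` threshold value are assumptions, not the paper's implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_template(signal, template, min_ncc=0.8):
    """Slide `template` over `signal`; return (best_offset, best_score),
    or (None, best_score) when the match is too unreliable to keep,
    mirroring the confidence-based rejection of landmarks."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template)
              for i in range(len(signal) - w + 1)]
    best = int(np.argmax(scores))
    return (best if scores[best] >= min_ncc else None), scores[best]
```

An exact copy of the template matches with NCC 1.0, while an unrelated pattern falls below the threshold and is rejected.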
|
320
|
Review on Retrospective Procedures to Correct Retinal Motion Artefacts in OCT Imaging. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9132700] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
Motion artefacts from involuntary changes in eye fixation remain a major imaging issue in optical coherence tomography (OCT). This paper reviews the state of the art in retrospective procedures to correct retinal motion and axial eye motion artefacts in OCT imaging. Following an overview of motion-induced artefacts and correction strategies, a chronological survey of retrospective approaches, from the introduction of OCT to the present day, is presented. Pre-processing, registration, and validation techniques are described. The review finishes by discussing the limitations of current techniques and the challenges to be tackled in future developments.
|
321
|
Ferrante E, Dokania PK, Silva RM, Paragios N. Weakly Supervised Learning of Metric Aggregations for Deformable Image Registration. IEEE J Biomed Health Inform 2019; 23:1374-1384. [DOI: 10.1109/jbhi.2018.2869700] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
322
|
Zheng Q, Wang Y, Heng PA. Online Subspace Learning from Gradient Orientations for Robust Image Alignment. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:3383-3394. [PMID: 30714923 DOI: 10.1109/tip.2019.2896528] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Robust and efficient image alignment remains a challenging task, due to the massiveness of images, great illumination variations between images, partial occlusion, and corruption. To address these challenges, we propose an online image alignment method via subspace learning from image gradient orientations (IGOs). The proposed method integrates subspace learning, transformed IGO reconstruction, and image alignment into a unified online framework, which is robust for aligning images with severe intensity distortions. Our method is motivated by the observation that principal component analysis (PCA) of gradient orientations provides a more reliable low-dimensional subspace than PCA of pixel intensities. Instead of processing in the intensity domain like conventional methods, we seek alignment in the IGO domain, such that the aligned IGO of a newly arrived image can be decomposed as the sum of a sparse error and a linear combination of the IGO-PCA basis learned from previously well-aligned images. The optimization problem is tackled by iterative linearization that minimizes the l1-norm of the sparse error. Furthermore, the IGO-PCA basis is adaptively updated based on incremental thin singular value decomposition, which takes the shift of the IGO mean into consideration. The efficacy of the proposed method is validated on extensive challenging datasets through image alignment, medical atlas construction, and face recognition. The experimental results demonstrate that our algorithm provides more illumination- and occlusion-robust image alignment than state-of-the-art methods.
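The IGO representation underlying this method replaces pixel intensities with the cosine and sine of the gradient orientation at each pixel, which is unchanged by positive affine illumination transforms. A small numpy sketch of that mapping (illustrative; `igo_features` is a hypothetical name, not the authors' code):

```python
import numpy as np

def igo_features(img):
    """Map an image to its gradient-orientation representation:
    the stacked [cos(phi), sin(phi)] of the gradient angle phi at every
    pixel. Scaling or shifting the intensities leaves phi unchanged."""
    gy, gx = np.gradient(np.asarray(img, float))
    phi = np.arctan2(gy, gx)
    return np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()])
```

Applying a brightness/contrast change such as `3*img + 5` yields the same feature vector, which is the robustness property the IGO-PCA subspace exploits.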
|
323
|
Surface deformation analysis of collapsed lungs using model-based shape matching. Int J Comput Assist Radiol Surg 2019; 14:1763-1774. [PMID: 31250255 PMCID: PMC6797649 DOI: 10.1007/s11548-019-02013-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 06/05/2019] [Indexed: 11/05/2022]
Abstract
Purpose To facilitate intraoperative localization of lung nodules, this study used model-based shape matching techniques to analyze the inter-subject three-dimensional surface deformation induced by pneumothorax. Methods Contrast-enhanced computed tomography (CT) images of the left lungs of 11 live beagle dogs were acquired at two bronchial pressures (14 and 2 cm H2O). To address shape matching problems for largely deformed lung images with pixel intensity shift, a complete Laplacian-based shape matching solution that optimizes the differential displacement field was introduced. Results Experiments were performed to confirm the methods' registration accuracy using CT images of lungs. Shape similarity and target displacement errors in the registered models were improved compared with those from existing shape matching methods. Spatial displacement of the whole lung's surface was visualized with an average error of within 5 mm. Conclusion The proposed methods address problems with the matching of surfaces with large curvatures and deformations and achieved smaller registration errors than existing shape matching methods, even at the tip and ridge regions. The findings and inter-subject statistical representation are directly available for further research on pneumothorax deformation modeling. Electronic supplementary material The online version of this article (10.1007/s11548-019-02013-0) contains supplementary material, which is available to authorized users.
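Laplacian-based smoothing of the kind used in such shape matching can be illustrated on an open polyline, where each interior vertex is pulled toward the average of its two neighbors while the endpoints stay fixed (an umbrella-operator sketch under simplifying assumptions, not the paper's full surface solver):

```python
import numpy as np

def laplacian_smooth(points, n_iter=50, lam=0.5):
    """Umbrella-operator Laplacian smoothing of an open polyline: each
    interior vertex moves a fraction `lam` toward the average of its two
    neighbors; the two endpoints are kept fixed as boundary constraints."""
    p = np.asarray(points, float).copy()
    for _ in range(n_iter):
        avg = 0.5 * (p[:-2] + p[2:])          # neighbor averages
        p[1:-1] += lam * (avg - p[1:-1])      # relax interior vertices
    return p
```

A jagged zigzag profile relaxes toward the straight line between its fixed endpoints, which is the smoothing behavior exploited when regularizing displacement fields.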
|
324
|
Nie Z, Yang X. Deformable Image Registration Using Functions of Bounded Deformation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1488-1500. [PMID: 30714914 DOI: 10.1109/tmi.2019.2896170] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Deformable image registration is a widely used technique in the field of computer vision and medical image processing. Basically, the task of deformable image registration is to find the displacement field between the moving image and the fixed image. Many variational models are proposed for deformable image registration, under the assumption that the displacement field is continuous and smooth. However, displacement fields may be discontinuous, especially for medical images with intensity inhomogeneity, pathological tissues, or heavy noises. In the mathematical theory of elastoplasticity, when the displacement fields are possibly discontinuous, a suitable framework for describing the displacement fields is the space of functions of bounded deformation (BD). Inspired by this, we propose a novel deformable registration model, called the BD model, which allows discontinuities of displacement fields in images. The BD model is formulated in a variational framework by supposing the displacement field to be a function of BD. The existence of solutions of this model is proven. Numerical experiments on 2D images show that the BD model outperforms the classical demons model, the log-domain diffeomorphic demons model, and the state-of-the-art vectorial total variation model. Numerical experiments on two public 3D databases show that the target registration error of the BD model is competitive compared with more than ten other models.
|
325
|
Bakas S, Doulgerakis-Kontoudis M, Hunter GJA, Sidhu PS, Makris D, Chatzimichail K. Evaluation of Indirect Methods for Motion Compensation in 2-D Focal Liver Lesion Contrast-Enhanced Ultrasound (CEUS) Imaging. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:1380-1396. [PMID: 30952468 DOI: 10.1016/j.ultrasmedbio.2019.01.023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2018] [Revised: 01/05/2019] [Accepted: 01/27/2019] [Indexed: 05/14/2023]
Abstract
This study investigates the application and evaluation of existing indirect methods, namely point-based registration techniques, for the estimation and compensation of observed motion included in the 2-D image plane of contrast-enhanced ultrasound (CEUS) cine-loops recorded for the characterization and diagnosis of focal liver lesions (FLLs). The value of applying motion compensation in the challenging modality of CEUS is to assist in the quantification of the perfusion dynamics of an FLL in relation to its parenchyma, allowing for a potentially accurate diagnostic suggestion. Towards this end, this study also proposes a novel quantitative multi-level framework for evaluating the quantification of FLLs, which to the best of our knowledge remains undefined, notwithstanding many relevant studies. Following quantitative evaluation of 19 indirect algorithms and configurations, while also considering the requirement for computational efficiency, our results suggest that the "compact and real-time descriptor" (CARD) is the optimal indirect motion compensation method in CEUS.
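Point-based registration of the kind evaluated above estimates a transform from matched landmarks. As a minimal illustration (a standard least-squares rigid Kabsch fit in 2-D, not one of the 19 algorithms compared in the study):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch): rotation R and translation t
    minimizing sum ||R @ src_i + t - dst_i||^2 over matched 2-D points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs
```

Given landmarks displaced by a known rotation and translation, the fit recovers both exactly.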
Affiliation(s)
- Spyridon Bakas: Digital Information Research Centre (DIRC), School of Computer Science & Mathematics, Faculty of Science, Engineering and Computing (SEC), Kingston University, London, United Kingdom; Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Richards Medical Research Laboratories, Hamilton Walk, Philadelphia, Pennsylvania, USA
- Matthaios Doulgerakis-Kontoudis: Digital Information Research Centre (DIRC), School of Computer Science & Mathematics, Faculty of Science, Engineering and Computing (SEC), Kingston University, London, United Kingdom; Medical Imaging and Image Interpretation Group, School of Computer Science, University of Birmingham, Edgbaston, United Kingdom
- Gordon J A Hunter: Digital Information Research Centre (DIRC), School of Computer Science & Mathematics, Faculty of Science, Engineering and Computing (SEC), Kingston University, London, United Kingdom
- Paul S Sidhu: Department of Radiology, King's College Hospital, London, United Kingdom
- Dimitrios Makris: Digital Information Research Centre (DIRC), School of Computer Science & Mathematics, Faculty of Science, Engineering and Computing (SEC), Kingston University, London, United Kingdom
- Katerina Chatzimichail: Radiology & Imaging Research Centre, Evgenidion Hospital, National and Kapodistrian University, Ilisia, Athens, Greece
|
326
|
Wang Z, Balgobind BV, Virgolin M, van Dijk IWEM, Wiersma J, Ronckers CM, Bosman PAN, Bel A, Alderliesten T. How do patient characteristics and anatomical features correlate to accuracy of organ dose reconstruction for Wilms' tumor radiation treatment plans when using a surrogate patient's CT scan? JOURNAL OF RADIOLOGICAL PROTECTION : OFFICIAL JOURNAL OF THE SOCIETY FOR RADIOLOGICAL PROTECTION 2019; 39:598-619. [PMID: 30965301 DOI: 10.1088/1361-6498/ab1796] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In retrospective radiation treatment (RT) dosimetry, a surrogate anatomy is often used for patients without a 3D CT scan. To gain insight into which aspects of a surrogate anatomy are crucial for accurate dose reconstruction, we investigated how patient characteristics and internal anatomical features relate to deviations in reconstructed organ dose when using surrogate patients' CT scans. Abdominal CT scans of 35 childhood cancer patients (age: 2.1-5.6 yr; 17 boys, 18 girls) undergoing RT during 2004-2016 were included. Based on whether an intact right or left kidney is present in the CT scan, two groups were formed, each containing 24 patients. From each group, four CTs associated with Wilms' tumor RT plans with an anterior-posterior-posterior-anterior field setup were selected as references. For each reference, a 2D digitally reconstructed radiograph was computed from the reference CT to simulate a 2D radiographic image, and dose reconstruction was performed on the other CTs in the respective group. Deviations in organ mean dose (DEmean) of the reconstructions versus the references were calculated, as were deviations in patient characteristics (i.e. age, height, weight) and in anatomical features including organ volume, location (in 3D), and spatial overlaps. Per reference, the Pearson's correlation coefficient between deviations in DEmean and patient characteristics/features was studied. Deviations in organ location and DEmean for the liver, spleen, and right kidney were moderately correlated (R2 > 0.5) for 8/8, 5/8, and 3/4 reference plans, respectively. Deviations in organ volume or spatial overlap and DEmean for the right and left kidney were weakly correlated (0.3 < R2 < 0.5) in 4/4 and 1/4 reference plans. No correlations (R2 < 0.3) were found between deviations in age or height and DEmean. Therefore, the performance of organ dose reconstruction using surrogate patients' CT scans is primarily related to deviation in organ location, followed by organ volume and spatial overlap. Furthermore, results were plan dependent.
Affiliation(s)
- Ziyuan Wang: Department of Radiation Oncology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
|
327
|
Evolutionary Machine Learning for Multi-Objective Class Solutions in Medical Deformable Image Registration. ALGORITHMS 2019. [DOI: 10.3390/a12050099] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Current state-of-the-art medical deformable image registration (DIR) methods optimize a weighted sum of key objectives of interest. Having a pre-determined weight combination that leads to high-quality results for any instance of a specific DIR problem (i.e., a class solution) would facilitate clinical application of DIR. However, such a combination can vary widely for each instance and is currently often manually determined. A multi-objective optimization approach for DIR removes the need for manual tuning, providing a set of high-quality trade-off solutions. Here, we investigate machine learning for a multi-objective class solution, i.e., not a single weight combination, but a set thereof, that, when used on any instance of a specific DIR problem, approximates such a set of trade-off solutions. To this end, we employed a multi-objective evolutionary algorithm to learn sets of weight combinations for three breast DIR problems of increasing difficulty: 10 prone-prone cases, 4 prone-supine cases with limited deformations and 6 prone-supine cases with larger deformations and image artefacts. Clinically-acceptable results were obtained for the first two problems. Therefore, for DIR problems with limited deformations, a multi-objective class solution can be machine learned and used to compute straightforwardly multiple high-quality DIR outcomes, potentially leading to more efficient use of DIR in clinical practice.
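Multi-objective optimization, as used above, returns a set of trade-off (non-dominated) solutions rather than a single weighted-sum optimum. A minimal sketch of extracting the Pareto front from candidate objective pairs, both to be minimized (illustrative only; not the evolutionary algorithm used in the paper):

```python
def pareto_front(points):
    """Return the non-dominated subset of (f1, f2) objective pairs, both
    minimized: a point is kept iff no other point is <= in both objectives
    (weak dominance by a distinct point removes it)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

For candidates like `[(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]`, the points `(3, 4)` and `(5, 5)` are dominated and removed, leaving the trade-off set.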
|
328
|
Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019; 96:12-24. [DOI: 10.1016/j.artmed.2019.03.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 03/04/2019] [Accepted: 03/05/2019] [Indexed: 10/27/2022]
|
329
|
Szmul A, Matin T, Gleeson FV, Schnabel JA, Grau V, Papież BW. Patch-based lung ventilation estimation using multi-layer supervoxels. Comput Med Imaging Graph 2019; 74:49-60. [PMID: 31009928 DOI: 10.1016/j.compmedimag.2019.04.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 03/31/2019] [Accepted: 04/02/2019] [Indexed: 01/03/2023]
Abstract
Patch-based approaches have received substantial attention in medical imaging over recent years. One of their potential applications is to provide more anatomically consistent ventilation maps estimated from dynamic lung CT. An assessment of regional lung function may act as a guide for radiotherapy, ensuring a more accurate treatment plan; this, in turn, could spare well-functioning parts of the lungs. We present a novel method for lung ventilation estimation from dynamic lung CT imaging, combining a supervoxel-based image representation with deformations estimated during deformable image registration performed between peak breathing phases. For this, we propose a method that tracks changes in the intensity of previously extracted supervoxels. To evaluate the method, we calculate the correlation of the estimated ventilation maps with static ventilation images acquired from hyperpolarized xenon-129 MRI (XeMRI). We also investigate the influence of the different image registration methods used to estimate deformations between the peak breathing phases in dynamic CT imaging. We show that our method compares favorably with other ventilation estimation methods commonly used in the field, independently of the image registration method applied to the dynamic CT. Owing to its patch-based approach, our method may be physiologically more consistent with lung anatomy than previous methods relying on voxel-wise relationships: ventilation is estimated for supervoxels, which tend to group spatially close voxels with similar intensity values. The proposed method was evaluated on a dataset of three lung cancer patients undergoing radiotherapy treatment, resulting in an average correlation of 0.485 with XeMRI ventilation images, compared with 0.393 for the intensity-based approach, 0.231 for the Jacobian-based method, and 0.386 for the Hounsfield-unit-averaging method. Within the limitations of the small number of cases analyzed, these results suggest that the presented technique may be advantageous for CT-based ventilation estimation, and the higher correlation values demonstrate its potential to more accurately reflect lung physiology.
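One of the baselines compared above, the Jacobian-based method, derives ventilation from the local volume change encoded in the registration deformation. A numpy sketch of that general idea on a hypothetical 2-D displacement field (illustrative; not the paper's data, dimensionality, or implementation):

```python
import numpy as np

def jacobian_ventilation(ux, uy):
    """Specific-volume-change map from a 2-D displacement field (ux, uy):
    det(J) - 1 with J = I + grad(u). Positive values indicate local
    expansion (inhalation), negative values local contraction."""
    dux_dy, dux_dx = np.gradient(np.asarray(ux, float))  # rows = y, cols = x
    duy_dy, duy_dx = np.gradient(np.asarray(uy, float))
    det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return det - 1.0
```

A uniform 10% expansion in both directions gives det(J) = 1.1 * 1.1 = 1.21, i.e. a ventilation value of 0.21 everywhere; zero displacement gives zero.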
Affiliation(s)
- Adam Szmul: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Tahreema Matin: Department of Radiology, Oxford University Hospitals NHS FT, Oxford, UK
- Fergus V Gleeson: Department of Oncology, University of Oxford, UK; Department of Radiology, Oxford University Hospitals NHS FT, Oxford, UK
- Julia A Schnabel: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK; Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, UK
- Vicente Grau: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Bartłomiej W Papież: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, UK
|
330
|
Scheufele K, Mang A, Gholami A, Davatzikos C, Biros G, Mehl M. Coupling brain-tumor biophysical models and diffeomorphic image registration. COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING 2019; 347:533-567. [PMID: 31857736 PMCID: PMC6922029 DOI: 10.1016/j.cma.2018.12.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
We present SIBIA (Scalable Integrated Biophysics-based Image Analysis), a framework for joint image registration and biophysical inversion and we apply it to analyze MR images of glioblastomas (primary brain tumors). We have two applications in mind. The first one is normal-to-abnormal image registration in the presence of tumor-induced topology differences. The second one is biophysical inversion based on single-time patient data. The underlying optimization problem is highly non-linear and non-convex and has not been solved before with a gradient-based approach. Given the segmentation of a normal brain MRI and the segmentation of a cancer patient MRI, we determine tumor growth parameters and a registration map so that if we "grow a tumor" (using our tumor model) in the normal brain and then register it to the patient image, then the registration mismatch is as small as possible. This "coupled problem" two-way couples the biophysical inversion and the registration problem. In the image registration step we solve a large-deformation diffeomorphic registration problem parameterized by an Eulerian velocity field. In the biophysical inversion step we estimate parameters in a reaction-diffusion tumor growth model that is formulated as a partial differential equation (PDE). In SIBIA, we couple these two sub-components in an iterative manner. We first presented the components of SIBIA in "Gholami et al., Framework for Scalable Biophysics-based Image Analysis, IEEE/ACM Proceedings of the SC2017", in which we derived parallel distributed memory algorithms and software modules for the decoupled registration and biophysical inverse problems. In this paper, our contributions are the introduction of a PDE-constrained optimization formulation of the coupled problem, and the derivation of a Picard iterative solution scheme. We perform extensive tests to experimentally assess the performance of our method on synthetic and clinical datasets. 
We demonstrate the convergence of the SIBIA optimization solver in different usage scenarios. We demonstrate that, using SIBIA, we can accurately solve the coupled problem in three dimensions (256³ resolution) in a few minutes using 11 dual-x86 nodes.
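The Picard scheme couples the two sub-components by solving one sub-problem with the other variable frozen and feeding the result back, until the iterates stop changing. A toy fixed-point version on a pair of coupled scalar equations (purely illustrative; in SIBIA the sub-problems are a PDE-constrained biophysical inversion and a diffeomorphic registration, not scalar maps):

```python
def picard(f, g, x0, y0, tol=1e-10, max_iter=200):
    """Picard (fixed-point) iteration for the coupled system
    x = f(y), y = g(x): alternate the two sub-problem solves
    until the combined update is below `tol`."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = f(y)
        y_new = g(x_new)
        if abs(x_new - x) + abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```

Convergence requires the composed update to be contractive, which is why such schemes are typically combined with damping or continuation in practice.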
Affiliation(s)
- Klaudius Scheufele: University of Stuttgart, IPVS, Universitätstraße 38, 70569 Stuttgart, Germany
- Andreas Mang: University of Houston, Department of Mathematics, 3551 Cullen Blvd., Houston, TX 77204-3008, USA
- Amir Gholami: University of California Berkeley, EECS, Berkeley, CA 94720-1776, USA
- Christos Davatzikos: Department of Radiology, University of Pennsylvania School of Medicine, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
- George Biros: University of Texas, ICES, 201 East 24th St, Austin, TX 78712-1229, USA
- Miriam Mehl: University of Stuttgart, IPVS, Universitätstraße 38, 70569 Stuttgart, Germany
|
331
|
Lin C, Wang Y, Wang T, Ni D. Low-Rank Based Image Analyses for Pathological MR Image Segmentation and Recovery. Front Neurosci 2019; 13:333. [PMID: 31024244 PMCID: PMC6465608 DOI: 10.3389/fnins.2019.00333] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2019] [Accepted: 03/21/2019] [Indexed: 01/17/2023] Open
Abstract
The presence of pathologies in magnetic resonance (MR) brain images causes challenges in various image analysis areas, such as registration, atlas construction and atlas-based segmentation. We propose a novel method for the simultaneous recovery and segmentation of pathological MR brain images. Low-rank and sparse decomposition (LSD) approaches have been widely used in this field, decomposing pathological images into (1) low-rank components as recovered images, and (2) sparse components as pathological segmentations. However, conventional LSD approaches often fail to produce recovered images reliably, due to the lack of a constraint between the low-rank and sparse components. To tackle this problem, we propose a transformed low-rank and structured sparse decomposition (TLS2D) method. The proposed TLS2D integrates the structured sparse constraint, LSD and image alignment into a unified scheme, which is robust for distinguishing pathological regions. Furthermore, well-recovered images can be obtained using TLS2D with the combined structured sparsity and computed image saliency as an adaptive sparsity constraint. The efficacy of the proposed method is verified on synthetic and real MR brain tumor images. Experimental results demonstrate that our method can effectively provide satisfactory image recovery and tumor segmentation.
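A bare-bones low-rank plus sparse decomposition in the spirit of LSD can be written with two proximal operators. This is a sketch only: TLS2D additionally enforces structured sparsity, image alignment, and a saliency-adaptive constraint, none of which are modeled here, and the fixed thresholds `tau` and `lam` are illustrative choices.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear
    norm, which promotes a low-rank component."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # scale columns of U

def soft(X, lam):
    """Elementwise soft thresholding: proximal operator of the l1
    norm, which promotes a sparse component."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def lsd(M, tau=1.0, lam=0.1, iters=50):
    """Alternate low-rank (L) and sparse (S) updates so that
    M ≈ L + S; pathological outlier regions end up in S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau)
        S = soft(M - L, lam)
    return L, S
```

After each sparse update the residual M - L - S is bounded elementwise by `lam`, which is what makes the split well defined.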
Affiliation(s)
- Yi Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China
|
332
|
O'Donnell LJ, Daducci A, Wassermann D, Lenglet C. Advances in computational and statistical diffusion MRI. NMR IN BIOMEDICINE 2019; 32:e3805. [PMID: 29134716 PMCID: PMC5951736 DOI: 10.1002/nbm.3805] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/08/2016] [Revised: 07/31/2017] [Accepted: 08/14/2017] [Indexed: 06/03/2023]
Abstract
Computational methods are crucial for the analysis of diffusion magnetic resonance imaging (MRI) of the brain. Computational diffusion MRI can provide rich information at many size scales, including local microstructure measures such as diffusion anisotropies or apparent axon diameters, whole-brain connectivity information that describes the brain's wiring diagram and population-based studies in health and disease. Many of the diffusion MRI analyses performed today were not possible five, ten or twenty years ago, due to the requirements for large amounts of computer memory or processor time. In addition, mathematical frameworks had to be developed or adapted from other fields to create new ways to analyze diffusion MRI data. The purpose of this review is to highlight recent computational and statistical advances in diffusion MRI and to put these advances into context by comparison with the more traditional computational methods that are in popular clinical and scientific use. We aim to provide a high-level overview of interest to diffusion MRI researchers, with a more in-depth treatment to illustrate selected computational advances.
Affiliation(s)
- Lauren J O'Donnell: Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Alessandro Daducci: Computer Science Department, University of Verona, Verona, Italy; Radiology Department, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Demian Wassermann: Athena Team, Inria Sophia Antipolis-Méditerranée, 2004 route des Lucioles, 06902 Biot, France
- Christophe Lenglet: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
|
333
|
Fan J, Cao X, Yap PT, Shen D. BIRNet: Brain image registration using dual-supervised fully convolutional networks. Med Image Anal 2019; 54:193-206. [PMID: 30939419 DOI: 10.1016/j.media.2019.03.006] [Citation(s) in RCA: 122] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2018] [Revised: 03/09/2019] [Accepted: 03/21/2019] [Indexed: 11/30/2022]
Abstract
In this paper, we propose a deep learning approach for image registration by predicting deformation from image appearance. Since obtaining ground-truth deformation fields for training can be challenging, we design a fully convolutional network that is subject to dual-guidance: (1) Ground-truth guidance using deformation fields obtained by an existing registration method; and (2) Image dissimilarity guidance using the difference between the images after registration. The latter guidance helps avoid overly relying on the supervision from the training deformation fields, which could be inaccurate. For effective training, we further improve the deep convolutional network with gap filling, hierarchical loss, and multi-source strategies. Experiments on a variety of datasets show promising registration accuracy and efficiency compared with state-of-the-art methods.
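The dual guidance can be summarized as a composite training loss. The numpy version below is schematic: the weighting `alpha` is a hypothetical parameter of this sketch, and the paper's network architecture, gap filling, and hierarchical loss are not reproduced.

```python
import numpy as np

def dual_guided_loss(pred_field, guide_field, warped, fixed, alpha=0.5):
    """Weighted sum of (1) supervision against a deformation field
    produced by an existing registration method and (2) dissimilarity
    between the warped moving image and the fixed image."""
    supervision = np.mean((pred_field - guide_field) ** 2)
    dissimilarity = np.mean((warped - fixed) ** 2)
    return alpha * supervision + (1.0 - alpha) * dissimilarity
```

The second term is what keeps the network from overfitting to inaccuracies in the training deformation fields.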
Affiliation(s)
- Jingfan Fan: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Xiaohuan Cao: Shanghai United Imaging Intelligence Co. Ltd., Shanghai, China
- Pew-Thian Yap: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
|
334
|
Kahaki SMM, Wang SL, Stepanyants A. Accurate registration of in vivo time-lapse images. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10949. [PMID: 30956384 DOI: 10.1117/12.2512257] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
In vivo imaging experiments often require automated detection and tracking of changes in the specimen. These tasks can be hindered by variations in the position and orientation of the specimen relative to the microscope, as well as by linear and nonlinear tissue deformations. We propose a feature-based registration method, coupled with optimal transformations, designed to address these problems in 3D time-lapse microscopy images. Features are detected as local regions of maximum intensity in source and target image stacks, and their bipartite intensity dissimilarity matrix is used as an input to the Hungarian algorithm to establish initial correspondences. A random sampling refinement method is employed to eliminate outliers, and the resulting set of corresponding features is used to determine an optimal translation, rigid, affine, or B-spline transformation for the registration of the source and target images. Accuracy of the proposed algorithm was tested on fluorescently labeled axons imaged over a 68-day period with a two-photon laser scanning microscope. To that end, multiple axons in individual stacks of images were traced semi-manually and optimized in 3D, and the distances between the corresponding traces were measured before and after the registration. The results show that there is a progressive improvement in the registration accuracy with increasing complexity of the transformations. In particular, sub-micrometer accuracy (2-3 voxels) was achieved with the regularized affine and B-spline transformations.
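For a handful of detected features, the optimal one-to-one assignment on the bipartite dissimilarity matrix can be found by brute force over permutations. This stands in for the Hungarian algorithm the authors use (which scales polynomially to large feature sets); reducing each feature's intensity descriptor to a scalar is a simplification of this sketch.

```python
import itertools
import numpy as np

def match_features(src, tgt):
    """Minimum-cost one-to-one matching between source and target
    feature descriptors, using |difference| as the dissimilarity.
    Returns best[i] = index of the target feature matched to source i."""
    D = np.abs(np.subtract.outer(src, tgt))  # bipartite cost matrix
    n = len(src)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(D[i, p[i]] for i in range(n)))
    return list(best)
```

The resulting correspondences would then be filtered by random-sampling outlier rejection before fitting the transformation, as the abstract describes.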
Affiliation(s)
- Seyed M M Kahaki: Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
- Shih-Luen Wang: Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
- Armen Stepanyants: Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
|
335
|
Castillo E. Quadratic penalty method for intensity-based deformable image registration and 4DCT lung motion recovery. Med Phys 2019; 46:2194-2203. [PMID: 30801729 DOI: 10.1002/mp.13457] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 11/09/2022] Open
Abstract
Intensity-based deformable image registration (DIR) requires minimizing an image dissimilarity metric. Imaged anatomy, such as bones and vasculature, as well as the resolution of the digital grid, can often cause discontinuities in the corresponding objective function. Consequently, the application of a gradient-based optimization algorithm requires preprocessing image smoothing to ensure the existence of the necessary image derivatives. Simple block matching (exhaustive search) methods do not require image derivative approximations, but their general effectiveness is often hindered by erroneous solutions (outliers). Block matching methods are therefore often coupled with a statistical outlier detection method to improve results. PURPOSE: The purpose of this work is to present a spatially accurate, intensity-based DIR optimization formulation that can be solved with a straightforward gradient-free quadratic penalty algorithm and is suitable for 4D thoracic computed tomography (4DCT) registration. Additionally, a novel regularization strategy based on the well-known leave-one-out robust statistical cross-validation method is introduced. METHODS: The proposed Quadratic Penalty DIR (QPDIR) method minimizes both an image dissimilarity term, which is separable with respect to individual voxel displacements, and a regularization term derived from the classical leave-one-out cross-validation statistical method. The resulting DIR problem lends itself to a quadratic penalty function optimization approach, where each subproblem can be solved by straightforward block coordinate descent iteration. RESULTS: The spatial accuracy of the method was assessed using expert-determined landmarks on ten 4DCT datasets available at www.dir-lab.com. The QPDIR algorithm achieved average spatial errors between 0.69 (0.91) mm and 1.19 (1.26) mm on the ten test cases.
On all ten 4DCT test cases, the QPDIR method produced spatial accuracies superior or equivalent to those produced by current state-of-the-art methods. Moreover, QPDIR achieved accuracies at the resolution of the landmark error assessment (i.e., the interobserver error) on six of the ten cases. CONCLUSION: The QPDIR algorithm is based on a simple quadratic penalty function formulation and a regularization term inspired by leave-one-out cross-validation. The formulation lends itself to a parallelizable, gradient-free, block coordinate descent numerical optimization method. Numerical results indicate that the method achieves high spatial accuracy on 4DCT inhale/exhale phases.
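The quadratic penalty idea can be illustrated on a toy constrained problem. This is schematic only: in QPDIR the optimization variables are per-voxel displacements and the coupling enforces the cross-validation-derived regularity, whereas here two scalars are tied together by a single equality constraint, and the penalty schedule is an illustrative choice.

```python
def quadratic_penalty(mu_schedule=(1.0, 10.0, 100.0), inner=200):
    """Minimize (x-1)^2 + (y-3)^2 subject to x = y by replacing the
    constraint with a penalty mu*(x-y)^2 and applying gradient-free
    block coordinate descent: each block update is an exact 1-D
    quadratic minimization, and mu is increased across outer sweeps."""
    x = y = 0.0
    for mu in mu_schedule:
        for _ in range(inner):
            x = (1.0 + mu * y) / (1.0 + mu)  # argmin over x, y fixed
            y = (3.0 + mu * x) / (1.0 + mu)  # argmin over y, x fixed
    return x, y
```

As mu grows, the iterates approach the constrained minimizer x = y = 2; the block updates are independent closed-form solves, which is what makes the scheme parallelizable in the full DIR setting.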
Affiliation(s)
- Edward Castillo: Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA; Department of Computation and Applied Mathematics, Rice University, Houston, TX, USA
|
336
|
Pirpinia K, Bosman PAN, Loo CE, Russell NS, van Herk MB, Alderliesten T. Simplex-based navigation tool for a posteriori selection of the preferred deformable image registration outcome from a set of trade-off solutions obtained with multiobjective optimization for the case of breast MRI. J Med Imaging (Bellingham) 2019; 5:045501. [PMID: 30840735 DOI: 10.1117/1.jmi.5.4.045501] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2018] [Accepted: 10/10/2018] [Indexed: 11/14/2022] Open
Abstract
Multiobjective optimization approaches for deformable image registration (DIR) remove the need for manual adjustment of key parameters and provide a set of solutions that represent high-quality trade-offs between objectives of interest. Choosing a desired outcome a posteriori is potentially far more insightful as differences between solutions can be immediately visualized. The purpose of this work is to investigate whether such an approach allows clinical experts to intuitively select their preferred DIR outcome. To this end, we developed a simplex-based tool for solution navigation and asked 10 clinical experts to use it to choose their preferred DIR outcome from sets of trade-off solutions obtained for 10 breast magnetic resonance DIR cases of low (prone-prone DIR; n = 5 ) and high (prone-supine DIR; n = 5 ) difficulty, of patients and volunteers, respectively. The usability of the software is subsequently evaluated by the observers using the system usability scale. Further, the quality of the selected DIR outcomes is evaluated using the mean target registration error. Results show that the users are able to identify and select high-quality DIR outcomes, and attested to high learnability and usability of our software, supporting the validity of the presumed added value of taking a multiobjective perspective on DIR in clinical practice.
Affiliation(s)
- Kleopatra Pirpinia: Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, The Netherlands
- Peter A N Bosman: Life Sciences and Health Group, Centrum Wiskunde and Informatica, Amsterdam, The Netherlands
- Claudette E Loo: Netherlands Cancer Institute, Department of Radiology, Amsterdam, The Netherlands
- Nicola S Russell: Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, The Netherlands
- Marcel B van Herk: University of Manchester, School of Medical Sciences, Manchester Cancer Research Centre, Manchester Academic Health Sciences Centre, Division of Cancer Science, Faculty of Biology, Medicine and Health, Manchester, United Kingdom
- Tanja Alderliesten: University of Amsterdam, Amsterdam UMC, Department of Radiation Oncology, Amsterdam, The Netherlands
|
337
|
Kocev B, Hahn HK, Linsen L, Wells WM, Kikinis R. Uncertainty-aware asynchronous scattered motion interpolation using Gaussian process regression. Comput Med Imaging Graph 2019; 72:1-12. [PMID: 30654093 PMCID: PMC6433137 DOI: 10.1016/j.compmedimag.2018.12.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 08/16/2018] [Accepted: 12/03/2018] [Indexed: 11/28/2022]
Abstract
We address the problem of interpolating randomly, non-uniformly, spatiotemporally scattered uncertain motion measurements, which arises in the context of soft-tissue motion estimation. Soft-tissue motion estimation is of great interest in the field of image-guided soft-tissue intervention and surgery navigation, because it enables the registration of pre-interventional/pre-operative navigation information on deformable soft-tissue organs. To formally define the measurements as spatiotemporally scattered motion signal samples, we propose a novel motion field representation. To perform the interpolation of the motion measurements in an uncertainty-aware, optimal, unbiased fashion, we devise a novel Gaussian process (GP) regression model with a non-constant-mean prior and an anisotropic covariance function, and show through an extensive evaluation that it outperforms the state-of-the-art GP models previously deployed for similar tasks. The use of GP regression enables quantification of the uncertainty in the interpolation result, making it possible to convey to the surgeon or intervention specialist the amount of uncertainty present in the registered navigation information that governs their decisions.
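A minimal GP regressor with an anisotropic squared-exponential covariance looks like the following. It is a sketch with hypothetical length scales and a constant prior mean; the paper's non-constant-mean prior and its specific covariance are not reproduced.

```python
import numpy as np

def aniso_rbf(A, B, scales):
    """Squared-exponential kernel with a separate length scale per
    input dimension (anisotropic covariance); unit signal variance."""
    d = (A[:, None, :] - B[None, :, :]) / np.asarray(scales)
    return np.exp(-0.5 * np.sum(d * d, axis=-1))

def gp_predict(X_train, y_train, X_test, scales, noise=1e-8, mean=0.0):
    """Posterior mean and variance of a GP with constant prior mean.
    The variance quantifies the uncertainty of each interpolated value."""
    K = aniso_rbf(X_train, X_train, scales) + noise * np.eye(len(X_train))
    K_s = aniso_rbf(X_test, X_train, scales)
    alpha = np.linalg.solve(K, y_train - mean)
    mu = mean + K_s @ alpha
    # diag(K_s K^{-1} K_s^T) subtracted from the unit prior variance
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
    return mu, var
```

The posterior variance is what would be propagated to the navigation display to indicate how trustworthy the interpolated motion is at each location.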
Affiliation(s)
- Bojan Kocev: Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Horst Karl Hahn: Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Computer Science and Electrical Engineering, Jacobs University Bremen, Bremen, Germany
- Lars Linsen: Institute of Computer Science, Westfälische Wilhelms-Universität Münster, Germany
- William M Wells: Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
- Ron Kikinis: Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany; Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, MA 02115, USA
|
338
|
Guo Z, Li X, Huang H, Guo N, Li Q. Deep Learning-based Image Segmentation on Multimodal Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2019; 3:162-169. [PMID: 34722958 PMCID: PMC8553020 DOI: 10.1109/trpms.2018.2890359] [Citation(s) in RCA: 134] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/27/2023]
Abstract
Multi-modality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multi-modal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multi-modal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep Convolutional Neural Networks (CNN) to contour the lesions of soft tissue sarcomas using multi-modal images, including those from Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET). The network trained with multi-modal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e. fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e. voting). This study provides empirical guidance for the design and application of multi-modal image analysis.
Affiliation(s)
- Zhe Guo: School of Information and Electronics, Beijing Institute of Technology, China
- Xiang Li: Massachusetts General Hospital, USA
- Heng Huang: Department of Electrical and Computer Engineering, University of Pittsburgh, USA
- Ning Guo: Massachusetts General Hospital, USA
|
339
|
A Coarse-to-Fine Registration Strategy for Multi-Sensor Images with Large Resolution Differences. REMOTE SENSING 2019. [DOI: 10.3390/rs11040470] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Automatic image registration for multi-sensor data has always been an important task in remote sensing applications. However, registration of images with large resolution differences has not been fully considered. A coarse-to-fine registration strategy for images with large differences in resolution is presented. The strategy consists of three phases. First, a feature-based registration method is applied to the resampled sensed image and the reference image. Edge point features acquired from the edge strength map (ESM) of the images are used to pre-register the two images quickly and robustly. Second, normalized mutual information-based registration is applied to the two images to obtain more accurate transformation parameters. Third, the final transformation parameters are acquired through direct registration between the original high- and low-resolution images. Ant colony optimization (ACO) for continuous domains is adopted to optimize the similarity metrics throughout the three phases. The proposed method has been tested on image pairs with different resolution ratios from different sensors, including satellite and aerial sensors. Control points (CPs) extracted from the images are used to calculate the registration accuracy of the proposed method and other state-of-the-art methods. The feature-based pre-registration validation experiment shows that the proposed method effectively narrows the value range of the registration parameters. The registration results indicate that the proposed method performs best and achieves sub-pixel registration accuracy on images with resolution differences from 1 to 50 times.
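The similarity metric driving the second phase, normalized mutual information, can be computed from a joint intensity histogram. This is a generic sketch with an illustrative bin count; the paper optimizes the metric with continuous-domain ACO rather than evaluating it in closed form as below.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a (possibly unnormalized-zero-padded)
    probability array, ignoring empty bins."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nmi(a, b, bins=32):
    """Normalized mutual information (Studholme's form):
    NMI = (H(A) + H(B)) / H(A, B). Equals 2 for identical images and
    approaches 1 for statistically independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    return (entropy(p.sum(axis=1)) + entropy(p.sum(axis=0))) / entropy(p)
```

The registration search then maximizes `nmi(reference, transformed_sensed)` over the transformation parameters.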
|
340
|
Ofverstedt J, Lindblad J, Sladoje N. Fast and Robust Symmetric Image Registration Based on Distances Combining Intensity and Spatial Information. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:3584-3597. [PMID: 30794174 DOI: 10.1109/tip.2019.2899947] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Intensity-based image registration approaches rely on similarity measures to guide the search for geometric correspondences with high affinity between images. The properties of the measure used are vital for the robustness and accuracy of the registration. In this study, a symmetric, intensity-interpolation-free affine registration framework based on a combination of intensity and spatial information is proposed. The excellent performance of the framework is demonstrated on a combination of synthetic tests, recovering known transformations in the presence of noise, and real applications in biomedical and medical image registration, for both 2D and 3D images. The method exhibits greater robustness and higher accuracy than similarity measures in common use, when inserted into a standard gradient-based registration framework available as part of the open-source Insight Segmentation and Registration Toolkit (ITK). The method is also empirically shown to have a low computational cost, making it practical for real applications. Source code is available.
|
341
|
Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1788-1800. [PMID: 30716034 DOI: 10.1109/tmi.2019.2897538] [Citation(s) in RCA: 632] [Impact Index Per Article: 105.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function for each pair of images, which can be time-consuming for large datasets or rich deformation models. In contrast to this approach, and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images. We parameterize the function via a convolutional neural network (CNN), and optimize the parameters of the neural network on a set of images. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. In this work, we explore two different training strategies. In the first (unsupervised) setting, we train the model to maximize standard image matching objective functions that are based on the image intensities. In the second setting, we leverage auxiliary segmentations available in the training data. We demonstrate that the unsupervised model's accuracy is comparable to state-of-the-art methods, while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data improves registration accuracy at test time, and evaluate the effect of training set size on registration. Our method promises to speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at https://github.com/voxelmorph/voxelmorph.
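The core of the unsupervised setting is a warp of the moving image by the predicted field plus an intensity-matching objective. The 2-D numpy sketch below is not the released VoxelMorph code: the CNN that predicts the field is omitted, MSE stands in for the paper's similarity terms, and the smoothness weight `lam` is an illustrative parameter.

```python
import numpy as np

def warp2d(img, field):
    """Bilinearly warp a 2-D image by a dense displacement field of
    shape (H, W, 2); field[..., 0] is dy, field[..., 1] is dx, in
    pixels. This plays the role of the spatial transformer."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + field[..., 0], 0, H - 1)
    x = np.clip(xs + field[..., 1], 0, W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0]
            + wy * wx * img[y1, x1])

def unsup_loss(moving, fixed, field, lam=0.01):
    """Unsupervised objective: dissimilarity of the warped moving
    image to the fixed image, plus a smoothness penalty on the
    displacement field's finite differences."""
    sim = np.mean((warp2d(moving, field) - fixed) ** 2)
    grad = (np.diff(field, axis=0) ** 2).mean() + \
           (np.diff(field, axis=1) ** 2).mean()
    return sim + lam * grad
```

In the learning-based setting this loss is backpropagated through the warp into the network's parameters, so registration of a new pair reduces to a single forward pass.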
|
342
|
Mohseni Salehi SS, Khan S, Erdogmus D, Gholipour A. Real-Time Deep Pose Estimation With Geodesic Loss for Image-to-Template Rigid Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:470-481. [PMID: 30138909 PMCID: PMC6438698 DOI: 10.1109/tmi.2018.2866442] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
With the aim of increasing the capture range and accelerating the performance of state-of-the-art inter-subject and subject-to-template 3-D rigid registration, we propose deep learning-based methods that are trained to find the 3-D position of arbitrarily oriented subjects or anatomy in a canonical space based on slices or volumes of medical images. For this, we propose regression convolutional neural networks (CNNs) that learn to predict the angle-axis representation of 3-D rotations and translations using image features. We use and compare mean square error and geodesic loss to train regression CNNs for 3-D pose estimation in two different scenarios: slice-to-volume registration and volume-to-volume registration. As an exemplary application, we applied the proposed methods to register arbitrarily oriented reconstructed images of fetuses scanned in utero across a wide gestational age range to a standard atlas space. Our results show that in such registration applications that are amenable to learning, the proposed deep learning methods with geodesic loss minimization achieved 3-D pose estimation with a wide capture range in real time (<100 ms). We also tested the generalization capability of the trained CNNs on an expanded age range and on images of newborn subjects with similar and different MR image contrasts. We trained our models on T2-weighted fetal brain MRI scans and used them to predict the 3-D pose of newborn brains based on T1-weighted MRI scans. We showed that the trained models generalized well to the new domain when we performed image contrast transfer through a conditional generative adversarial network. This indicates that the domain of application of the trained deep regression CNNs can be further expanded to image modalities and contrasts other than those used in training.
A combination of our proposed methods with accelerated optimization-based registration algorithms can dramatically enhance the performance of automatic imaging devices and image processing methods of the future.
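The geodesic loss discussed in this abstract can be made concrete: for two rotations it is the rotation angle of the relative rotation R1ᵀR2, i.e. the geodesic distance on SO(3). The following minimal sketch is our own illustration of that distance (not the authors' code); the function names are ours and NumPy is assumed:

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Angle of the relative rotation R1^T R2 -- the geodesic
    distance on SO(3) that a geodesic loss minimizes."""
    R = R1.T @ R2
    # Clip guards against arccos domain errors caused by round-off.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_theta))

def rot_z(theta):
    """Rotation by theta radians about the z-axis (for demonstration)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```

Unlike a mean-square error on rotation parameters, this distance is uniform over SO(3), which is what makes it attractive for pose regression with a wide capture range.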
Collapse
|
343
|
An Image Registration Approach Based on 3D Geometric Projection Similarity of the Human Head. J Med Biol Eng 2019. [DOI: 10.1007/s40846-018-0395-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
344
|
Muenzing SEA, Strauch M, Truman JW, Bühler K, Thum AS, Merhof D. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster. Neuroinformatics 2019; 16:65-80. [PMID: 29127664 PMCID: PMC5797188 DOI: 10.1007/s12021-017-9349-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce methods for (1) generating a standard template of the larval central nervous system (CNS) and (2) spatially mapping expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for the evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0
Collapse
Affiliation(s)
- Sascha E A Muenzing
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany; Forschungszentrum Jülich, Institute of Neuroscience and Medicine, Jülich, Germany
| | - Martin Strauch
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
| | - James W Truman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Friday Harbor Laboratories, University of Washington, Friday Harbor, WA, USA
| | - Katja Bühler
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria
| | - Andreas S Thum
- Department of Biology, University of Konstanz, Constance, Germany; Zukunftskolleg, University of Konstanz, Constance, Germany; Department of Genetics, University of Leipzig, Leipzig, Germany
| | - Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany.
| |
Collapse
|
345
|
Hernandez M. PDE-constrained LDDMM via geodesic shooting and inexact Gauss-Newton-Krylov optimization using the incremental adjoint Jacobi equations. Phys Med Biol 2019; 64:025002. [PMID: 30523830 DOI: 10.1088/1361-6560/aaf598] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
The class of non-rigid registration methods proposed in the framework of PDE-constrained large deformation diffeomorphic metric mapping (LDDMM) is a particularly interesting family of physically meaningful diffeomorphic registration methods. Inexact Gauss-Newton-Krylov optimization has shown excellent numerical accuracy and an extraordinarily fast convergence rate in this framework. However, the Galerkin representation of the non-stationary velocity fields does not provide proper geodesic paths. In this work, we propose a method for PDE-constrained LDDMM parameterized in the space of initial velocity fields under the EPDiff equation. The gradient and the Hessian-vector products are derived on the final velocity field and transported backward using the adjoint and the incremental adjoint Jacobi equations. This way, we avoid the complex dependence on the initial velocity field in the computations. We also avoid the computation of the adjoint equation and its incremental counterpart, which has recently been identified as a subtle problem in PDE-constrained LDDMM. The proposed method provides geodesics in the framework of PDE-constrained LDDMM, and its performance is competitive with benchmark PDE-constrained LDDMM and EPDiff-LDDMM methods.
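For readers unfamiliar with the geodesic-shooting formulation referenced here, the following summarizes it in standard LDDMM notation (our summary of the well-known formulation, not an equation reproduced from this paper):

```latex
% Geodesic shooting in LDDMM: the path of diffeomorphisms \varphi_t is
% driven by a time-dependent velocity field v_t,
%   \partial_t \varphi_t = v_t \circ \varphi_t, \qquad \varphi_0 = \mathrm{id},
% where v_t is determined by the initial velocity v_0 alone through the
% EPDiff (geodesic) equation on the momentum m_t = L v_t:
\partial_t m_t + \operatorname{ad}^{*}_{v_t} m_t = 0, \qquad m_t = L v_t .
% Here L is the chosen differential operator defining the metric and
% \operatorname{ad}^{*} is the coadjoint action. Parameterizing the whole
% path by v_0 under this equation is what yields proper geodesic paths,
% in contrast to the Galerkin representation criticized in the abstract.
```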
Collapse
Affiliation(s)
- Monica Hernandez
- Department of Computer Science, Aragon Institute on Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain
| |
Collapse
|
346
|
Nobnop W, Chitapanarux I, Wanwilairat S, Tharavichitkul E, Lorvidhaya V, Sripan P. Effect of Deformation Methods on the Accuracy of Deformable Image Registration From Kilovoltage CT to Tomotherapy Megavoltage CT. Technol Cancer Res Treat 2019; 18:1533033818821186. [PMID: 30803375 PMCID: PMC6373993 DOI: 10.1177/1533033818821186] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
INTRODUCTION The registration accuracy of megavoltage computed tomography images is limited by their low image contrast compared to that of kilovoltage computed tomography images. Such issues may degrade the deformable image registration accuracy. This study evaluates deformable image registration from kilovoltage to megavoltage images using different deformation methods, assessed on nasopharyngeal carcinoma patient images. METHODS The kilovoltage and megavoltage images from the first day and the 20th fraction of treatment of 12 patients with nasopharyngeal carcinoma were used to evaluate the deformable image registration application. The deformable image registration procedures were classified into 3 groups: kilovoltage to kilovoltage, megavoltage to megavoltage, and kilovoltage to megavoltage. Three deformable image registration methods were employed using the deformable image registration and adaptive radiotherapy software. The validation was compared by volume-based, intensity-based, and deformation field analyses. RESULTS The use of different deformation methods greatly affected the deformable image registration accuracy from kilovoltage to megavoltage. The asymmetric transformation with the demon method was significantly better than the other methods and showed satisfactory values for adaptive applications. The deformable image registration accuracy from kilovoltage to megavoltage showed no significant difference from the kilovoltage to kilovoltage images when using the appropriate registration method. CONCLUSIONS The choice of deformation method should be considered when applying deformable image registration from kilovoltage to megavoltage images. The deformable image registration accuracy from kilovoltage to megavoltage showed good agreement in terms of intensity-based, volume-based, and deformation field analyses, indicating clinically useful methods for nasopharyngeal carcinoma adaptive radiotherapy in tomotherapy applications.
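Volume-based agreement in studies like this one is commonly quantified with the Dice similarity coefficient. The following is a minimal sketch of that metric (our own illustration of the standard definition, not this study's analysis code; NumPy assumed):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

In a registration context, one mask is a structure contoured on the fixed image and the other is the same structure propagated through the deformation being evaluated.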
Collapse
Affiliation(s)
- Wannapha Nobnop
- Division of Radiation Oncology, Department of Radiology, Chiang Mai University, Chiang Mai, Thailand
| | - Imjai Chitapanarux
- Division of Radiation Oncology, Department of Radiology, Chiang Mai University, Chiang Mai, Thailand
| | - Somsak Wanwilairat
- Division of Radiation Oncology, Department of Radiology, Chiang Mai University, Chiang Mai, Thailand
| | - Ekkasit Tharavichitkul
- Division of Radiation Oncology, Department of Radiology, Chiang Mai University, Chiang Mai, Thailand
| | - Vicharn Lorvidhaya
- Division of Radiation Oncology, Department of Radiology, Chiang Mai University, Chiang Mai, Thailand
| | - Patumrat Sripan
- Division of Radiation Oncology, Department of Radiology, Chiang Mai University, Chiang Mai, Thailand
| |
Collapse
|
347
|
Bashiri FS, Baghaie A, Rostami R, Yu Z, D’Souza RM. Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach. J Imaging 2018; 5:5. [PMID: 34470183 PMCID: PMC8320870 DOI: 10.3390/jimaging5010005] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Revised: 12/23/2018] [Accepted: 12/25/2018] [Indexed: 11/16/2022] Open
Abstract
Multi-modal image registration is the primary step in integrating information stored in two or more images captured using multiple imaging modalities. In addition to intensity variations and structural differences between images, the images may have partial or full overlap, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that facilitates the direct application of well-founded mono-modal registration methods in order to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation facilitates recovering strong scales, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation purposes, the effectiveness of the proposed method is examined and compared with widely used information theory-based techniques using simulated and clinical human brain images with full data. Using the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-weighted MRIs, respectively. In the end, we empirically investigate the efficacy of the proposed transformation in registering multi-modal partially overlapped images.
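The manifold-learning idea behind such a multi-modal-to-mono-modal transformation can be illustrated with a toy Laplacian-eigenmaps embedding of image patches, in which patches are mapped to a low-dimensional space where similar structures land close together regardless of their intensity mapping. This is our sketch of the general technique, not the authors' pipeline; all function and parameter names are ours:

```python
import numpy as np

def laplacian_embedding(patches, k=5, dim=1):
    """Toy Laplacian-eigenmaps embedding of flattened image patches.

    patches : (n, p) array of patch vectors.
    Returns an (n, dim) low-dimensional representation."""
    n = patches.shape[0]
    # Pairwise squared Euclidean distances between patches.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    # Symmetric k-NN affinity graph with a locally scaled Gaussian kernel.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # nearest neighbours, self excluded
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (d2[i, nbrs].mean() + 1e-12))
    W = np.maximum(W, W.T)
    # Unnormalized graph Laplacian; its low eigenvectors give the embedding.
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:1 + dim]  # skip the constant (null) eigenvector
```

Once both modalities are embedded in such a common representation, a standard mono-modal similarity metric can be applied directly, which is the core convenience the abstract describes.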
Collapse
Affiliation(s)
- Fereshteh S. Bashiri
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
| | - Ahmadreza Baghaie
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
| | - Reihaneh Rostami
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
| | - Zeyun Yu
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
| | - Roshan M. D’Souza
- Department of Mechanical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
| |
Collapse
|
348
|
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609 DOI: 10.1016/j.zemedi.2018.11.002] [Citation(s) in RCA: 771] [Impact Index Per Article: 110.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 02/06/2023]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with, and perhaps contributing to, the field of deep learning for medical imaging, by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Collapse
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway.
| | - Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway.
| |
Collapse
|
349
|
Masoumi N, Xiao Y, Rivaz H. ARENA: Inter-modality affine registration using evolutionary strategy. Int J Comput Assist Radiol Surg 2018; 14:441-450. [DOI: 10.1007/s11548-018-1897-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 12/03/2018] [Indexed: 10/27/2022]
|
350
|
de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I. A deep learning framework for unsupervised affine and deformable image registration. Med Image Anal 2018; 52:128-143. [PMID: 30579222 DOI: 10.1016/j.media.2018.11.010] [Citation(s) in RCA: 334] [Impact Index Per Article: 47.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Revised: 11/13/2018] [Accepted: 11/20/2018] [Indexed: 01/12/2023]
Abstract
Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far, training of ConvNets for registration has been supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby increase the convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework, ConvNets are trained for image registration by exploiting image similarity, analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNet designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show, for registration of cardiac cine MRI and registration of chest CT, that the performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.
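The unsupervised training signal described here is simply an image-similarity metric evaluated on the warped moving image. The following minimal NumPy/SciPy sketch illustrates that principle for a 2-D image pair (our illustration, not the DLIR code; a normalized cross-correlation similarity is one common choice):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, disp):
    """Warp a 2-D image with a dense displacement field disp of shape (2, H, W)."""
    H, W = moving.shape
    gy, gx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([gy + disp[0], gx + disp[1]])
    # Linear interpolation; out-of-bounds samples are clamped to the edge.
    return map_coordinates(moving, coords, order=1, mode="nearest")

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation: the similarity an unsupervised
    registration network maximizes (equivalently, -ncc is its loss)."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())
```

In a framework like the one described, a network predicts `disp` from the image pair and is trained by backpropagating through a differentiable warp to maximize the similarity, with no example registrations required.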
Collapse
Affiliation(s)
- Bob D de Vos
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Floris F Berendsen
- Division of Image Processing of the Leiden University Medical Center, Leiden, The Netherlands
| | - Max A Viergever
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands
| | - Hessam Sokooti
- Division of Image Processing of the Leiden University Medical Center, Leiden, The Netherlands
| | - Marius Staring
- Division of Image Processing of the Leiden University Medical Center, Leiden, The Netherlands
| | - Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands
| |
Collapse
|