1
Burton W, Myers C, Stefanovic M, Shelburne K, Rullkoetter P. Scan-Free and Fully Automatic Tracking of Native Knee Anatomy from Dynamic Stereo-Radiography with Statistical Shape and Intensity Models. Ann Biomed Eng 2024; 52:1591-1603. [PMID: 38558356] [DOI: 10.1007/s10439-024-03473-5]
Abstract
Kinematic tracking of native anatomy from stereo-radiography provides a quantitative basis for evaluating human movement. Conventional tracking procedures require significant manual effort and call for acquisition and annotation of subject-specific volumetric medical images. The current work introduces a framework for fully automatic tracking of native knee anatomy from dynamic stereo-radiography which forgoes reliance on volumetric scans. The method consists of three computational steps. First, captured radiographs are annotated with segmentation maps and anatomic landmarks using a convolutional neural network. Next, a non-convex polynomial optimization problem formulated from the annotated landmarks is solved to acquire preliminary anatomy and pose estimates. Finally, a global optimization routine concurrently refines anatomy and pose by maximizing an objective function that quantifies similarity between masked radiographs and digitally reconstructed radiographs produced from statistical shape and intensity models. The proposed framework was evaluated against manually tracked trials comprising dynamic activities and additional frames capturing a static knee phantom. Experiments revealed anatomic surface errors routinely below 1.0 mm in both evaluation cohorts. Median absolute errors of individual bone pose estimates were below 1.0° or 1.0 mm for 15 of 18 degrees of freedom in both evaluation cohorts. Results indicate that accurate pose estimation of native anatomy from stereo-radiography may be performed with significantly reduced manual effort and without reliance on volumetric scans.
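The final refinement step described above hinges on an image similarity metric between measured radiographs and digitally reconstructed radiographs (DRRs). The sketch below is purely illustrative and not the authors' implementation: it fakes a DRR by parallel-ray integration of a toy volume and scores it with normalized cross-correlation, a common choice for this kind of objective.

```python
import numpy as np

def drr_from_volume(volume, axis=0):
    """Toy digitally reconstructed radiograph: integrate intensity along
    one axis (a parallel-beam stand-in for real cone-beam ray casting)."""
    return volume.sum(axis=axis)

def normalized_cross_correlation(a, b):
    """Similarity between two images, in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

# Synthetic example: a bright cube inside an empty volume.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
drr = drr_from_volume(vol)
score = normalized_cross_correlation(drr, drr)  # identical images -> ~1.0
```

In a real pipeline the optimizer perturbs shape and pose parameters, regenerates the DRR, and keeps the update that raises this score.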
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA.
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Margareta Stefanovic
- Department of Electrical and Computer Engineering, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Kevin Shelburne
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
2
Liu M, Martin-Gomez A, Oni JK, Mears SC, Armand M. Towards Visualizing Early-stage Osteonecrosis using Intraoperative Imaging Modalities. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022; 11:1234-1242. [PMID: 38179232] [PMCID: PMC10766436] [DOI: 10.1080/21681163.2022.2157329]
Abstract
Osteonecrosis of the Femoral Head (ONFH) is a progressive disease characterized by the death of bone cells due to the loss of blood supply. Early detection and treatment of this disease are vital to avoiding Total Hip Replacement. While early stages of ONFH can be diagnosed using Magnetic Resonance Imaging (MRI), commonly used intra-operative imaging modalities such as fluoroscopy frequently fail to depict the lesion, increasing the difficulty of intra-operative localization of osteonecrosis. This work introduces a novel framework that enables the localization of necrotic lesions in Computed Tomography (CT) as a step toward localizing and visualizing necrotic lesions in intra-operative images. The proposed framework uses deep learning algorithms to automatically segment the femur, pelvis, and necrotic lesions in MRI. An additional step performs semi-automatic segmentation of these anatomies, excluding the necrotic lesions, in CT. A final step performs pairwise registration of the corresponding anatomies, allowing for the localization and visualization of the necrosis in CT. To investigate the feasibility of integrating the proposed framework into the surgical workflow, we conducted experiments on MRIs and CTs containing early-stage ONFH. Our results indicate that the proposed framework is able to segment the anatomical structures of interest and accurately register the femurs and pelves of the corresponding volumes, allowing for the visualization and localization of the ONFH in CT and generated X-rays, which could enable intra-operative visualization of the necrotic lesions for surgical procedures such as core decompression of the femur.
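The pairwise rigid registration step that carries the lesion from the MRI frame to the CT frame can be illustrated with the classic Kabsch/Procrustes solution for corresponding landmarks. This is a generic sketch under that assumption, not the framework's actual registration method:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst
    (Kabsch algorithm). src, dst: (N, 3) corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])               # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate/translate a point cloud, recover the motion,
# then use it to carry a "lesion" point from the MRI frame into CT.
rng = np.random.default_rng(0)
pts_mri = rng.normal(size=(20, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
pts_ct = pts_mri @ R_true.T + t_true
R, t = rigid_register(pts_mri, pts_ct)
lesion_mri = np.array([0.5, 0.5, 0.5])
lesion_ct = R @ lesion_mri + t               # lesion localized in CT
```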
Affiliation(s)
- Mingxu Liu
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Julius K Oni
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
- Simon C Mears
- Department of Orthopaedic Surgery, University of Arkansas for Medical Sciences, AR, USA
- Mehran Armand
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
3
Xu J, Moyer D, Grant PE, Golland P, Iglesias JE, Adalsteinsson E. SVoRT: Iterative Transformer for Slice-to-Volume Registration in Fetal Brain MRI. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2022; 13436:3-13. [PMID: 37103480] [PMCID: PMC10129054] [DOI: 10.1007/978-3-031-16446-0_1]
Abstract
Volumetric reconstruction of fetal brains from multiple stacks of MR slices, acquired in the presence of almost unpredictable and often severe subject motion, is a challenging task that is highly sensitive to the initialization of slice-to-volume transformations. We propose a novel slice-to-volume registration method using Transformers trained on synthetically transformed data, which model multiple stacks of MR slices as a sequence. With the attention mechanism, our model automatically detects the relevance between slices and predicts the transformation of one slice using information from other slices. We also estimate the underlying 3D volume to assist slice-to-volume registration and update the volume and transformations alternately to improve accuracy. Results on synthetic data show that our method achieves lower registration error and better reconstruction quality compared with existing state-of-the-art methods. Experiments with real-world MRI data are also performed to demonstrate the ability of the proposed model to improve the quality of 3D reconstruction under severe fetal motion.
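For intuition, the attention mechanism the abstract credits with detecting inter-slice relevance can be sketched as plain scaled dot-product self-attention over toy slice embeddings (a generic illustration, not the SVoRT architecture; the embedding size is invented):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query (slice embedding) pools
    information from all slices, weighted by learned relevance scores."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)    # softmax over slices
    return w @ V, w

rng = np.random.default_rng(1)
n_slices, d = 5, 8
X = rng.normal(size=(n_slices, d))           # toy slice embeddings
out, weights = attention(X, X, X)            # self-attention
# Each row of `weights` is a distribution over the other slices,
# so the pose prediction for one slice can borrow from the rest.
```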
Affiliation(s)
- Junshen Xu
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Daniel Moyer
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- P Ellen Grant
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Polina Golland
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Juan Eugenio Iglesias
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA
- Elfar Adalsteinsson
- Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Institute for Medical Engineering and Science, MIT, Cambridge, MA, USA
4
Grimm M, Esteban J, Unberath M, Navab N. Pose-Dependent Weights and Domain Randomization for Fully Automatic X-Ray to CT Registration. IEEE Transactions on Medical Imaging 2021; 40:2221-2232. [PMID: 33861701] [DOI: 10.1109/tmi.2021.3073815]
Abstract
Fully automatic X-ray to CT registration requires a solid initialization to provide an initial alignment within the capture range of existing intensity-based registrations. This work addresses that need by providing a novel automatic initialization, which enables end-to-end registration. First, a neural network is trained once to detect a set of anatomical landmarks on simulated X-rays. A domain randomization scheme is proposed to enable the network to overcome the challenge of being trained purely on simulated data and run inference on real X-rays. Then, for each patient CT, a fully automatic patient-specific landmark extraction scheme is used. It is based on backprojecting and clustering the previously trained network's predictions on a set of simulated X-rays. Next, the network is retrained to detect the new landmarks. Finally, the combination of network and 3D landmark locations is used to compute the initialization using a perspective-n-point algorithm. During the computation of the pose, a weighting scheme is introduced to incorporate the confidence of the network in detecting the landmarks. The algorithm is evaluated on the pelvis using both real and simulated X-rays. The mean (± standard deviation) target registration error in millimetres is 4.1 ± 4.3 for simulated X-rays with a success rate of 92% and 4.2 ± 3.9 for real X-rays with a success rate of 86.8%, where a success is defined as a translation error of less than 30 mm.
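To make the landmark-confidence weighting concrete, the following sketch evaluates a confidence-weighted reprojection error for a candidate pose. The intrinsics, landmark coordinates, and confidence values are invented for illustration; this is an evaluation helper, not the paper's perspective-n-point solver itself:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3D landmarks X (N, 3) to 2D pixels."""
    Xc = X @ R.T + t
    uvw = Xc @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def weighted_reproj_rmse(K, R, t, X, uv_detected, conf):
    """Confidence-weighted RMS reprojection error in pixels: landmarks
    the network is unsure about contribute less to the pose cost."""
    err = np.linalg.norm(project(K, R, t, X) - uv_detected, axis=1)
    w = conf / conf.sum()
    return float(np.sqrt((w * err**2).sum()))

# Hypothetical camera, pose, and landmarks.
K = np.array([[1000.0, 0.0, 256.0], [0.0, 1000.0, 256.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])
X = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
uv = project(K, R, t, X)                 # perfect detections -> zero error
conf = np.array([0.9, 0.8, 0.95, 0.5])
cost = weighted_reproj_rmse(K, R, t, X, uv, conf)
```

A pose initializer would minimize this cost over (R, t) given the detected landmarks and their confidences.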
5
Gu W, Gao C, Grupp R, Fotouhi J, Unberath M. Extended Capture Range of Rigid 2D/3D Registration by Estimating Riemannian Pose Gradients. Machine Learning in Medical Imaging (MLMI Workshop) 2020; 12436:281-291. [PMID: 33145587] [PMCID: PMC7605345] [DOI: 10.1007/978-3-030-59861-7_29]
Abstract
Traditional intensity-based 2D/3D registration requires near-perfect initialization in order for image similarity metrics to yield meaningful updates of X-ray pose and reduce the likelihood of getting trapped in a local minimum. The conventional approaches strongly depend on image appearance rather than content, and therefore fail to reveal large pose offsets that substantially alter the appearance of the same structure. We complement traditional similarity metrics with a convolutional neural network-based (CNN-based) registration solution that captures large-range pose relations by extracting both local and contextual information, yielding meaningful X-ray pose updates without the need for accurate initialization. To register a 2D X-ray image and a 3D CT scan, our CNN accepts a target X-ray image and a digitally reconstructed radiograph at the current pose estimate as input and iteratively outputs pose updates in the direction of the pose gradient on the Riemannian manifold. Our approach integrates seamlessly with conventional image-based registration frameworks, where long-range relations are captured primarily by our CNN-based method while short-range offsets are recovered accurately with an image similarity-based method. On both synthetic and real X-ray images of the human pelvis, we demonstrate that the proposed method can successfully recover large rotational and translational offsets, irrespective of initialization.
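For the rotational part, a pose update "in the direction of the pose gradient on the Riemannian manifold" amounts to stepping along SO(3) via the exponential map. A minimal sketch of that update rule follows (the CNN that predicts the step is out of scope here; the step values are invented):

```python
import numpy as np

def hat(w):
    """so(3) hat operator: axis-angle vector -> skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    W = hat(w / theta)
    return np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)

def apply_update(R, delta_w):
    """One geodesic step: move the current rotation estimate along the
    manifold in the direction of a predicted tangent-space update."""
    return expm_so3(delta_w) @ R

R = np.eye(3)
step = np.array([0.0, 0.0, np.pi / 4])   # 45 degrees about z
R = apply_update(R, step)
R = apply_update(R, step)                # two steps -> 90 degrees about z
```

Composing updates this way keeps every iterate a valid rotation, which is why the update lives in the tangent space rather than in raw matrix entries.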
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore MD 21218, USA
- Cong Gao
- Johns Hopkins University, Baltimore MD 21218, USA
- Robert Grupp
- Johns Hopkins University, Baltimore MD 21218, USA
6
Gao C, Farvardin A, Grupp RB, Bakhtiarinejad M, Ma L, Thies M, Unberath M, Taylor RH, Armand M. Fiducial-Free 2D/3D Registration for Robot-Assisted Femoroplasty. IEEE Transactions on Medical Robotics and Bionics 2020; 2:437-446. [PMID: 33763632] [DOI: 10.1109/tmrb.2020.3012460]
Abstract
Femoroplasty is a proposed alternative therapeutic method for preventing osteoporotic hip fractures in the elderly. A previously developed navigation system for femoroplasty required the attachment of an external X-ray fiducial to the femur. We propose a fiducial-free 2D/3D registration pipeline using fluoroscopic images for robot-assisted femoroplasty. Intraoperative fluoroscopic images are taken from multiple views to perform registration of the femur and drilling/injection device. The proposed method was tested through comprehensive simulation and cadaveric studies. Performance was evaluated on the registration error of the femur and the drilling/injection device. In simulations, the proposed approach achieved a mean accuracy of 1.26±0.74 mm relative to the planned injection entry point, and 0.63±0.21° and 0.17±0.19° for the femur injection path direction and device guide direction, respectively. In the cadaver studies, a mean error of 2.64±1.10 mm was achieved between the planned entry point and the device guide tip. The biomechanical analysis showed that even with a 4 mm translational deviation from the optimal injection path, the yield load prior to fracture increased by 40.7%. This result suggests that the fiducial-free 2D/3D registration is sufficiently accurate to guide robot-assisted femoroplasty.
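The reported accuracy metrics (entry-point error in mm, path-direction error in degrees) can be computed as below; the planned and achieved vectors are made up for illustration and are not the study's data:

```python
import numpy as np

def angular_error_deg(u, v):
    """Angle in degrees between a planned and an achieved path direction."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return float(np.degrees(np.arccos(np.clip(u @ v, -1.0, 1.0))))

def entry_point_error_mm(p_planned, p_actual):
    """Euclidean distance between planned and achieved entry points."""
    return float(np.linalg.norm(p_actual - p_planned))

# Hypothetical planned vs. achieved injection path.
planned_dir = np.array([0.0, 0.0, 1.0])
actual_dir = np.array([0.0, np.sin(np.radians(0.6)), np.cos(np.radians(0.6))])
ang_err = angular_error_deg(planned_dir, actual_dir)        # ~0.6 degrees
pt_err = entry_point_error_mm(np.zeros(3), np.array([1.0, 2.0, 2.0]))
```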
Affiliation(s)
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Amirhossein Farvardin
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211
- Robert B Grupp
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Mahsan Bakhtiarinejad
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211
- Liuhong Ma
- Cranio-Maxillo-Facial Surgery Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, CHN, 100144
- Mareike Thies
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany 91058
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Russell H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Mehran Armand
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211
- Department of Orthopaedic Surgery and Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA 21224
7
Grupp RB, Unberath M, Gao C, Hegeman RA, Murphy RJ, Alexander CP, Otake Y, McArthur BA, Armand M, Taylor RH. Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration. Int J Comput Assist Radiol Surg 2020; 15:759-769. [PMID: 32333361] [PMCID: PMC7263976] [DOI: 10.1007/s11548-020-02162-7]
Abstract
PURPOSE Fluoroscopy is the standard imaging modality used to guide hip surgery and is therefore a natural sensor for computer-assisted navigation. In order to efficiently solve the complex registration problems presented during navigation, human-assisted annotations of the intraoperative image are typically required. This manual initialization interferes with the surgical workflow and diminishes any advantages gained from navigation. In this paper, we propose a method for fully automatic registration using anatomical annotations produced by a neural network. METHODS Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy. Training data are obtained using a computationally intensive, intraoperatively incompatible, 2D/3D registration of the pelvis and each femur. Ground truth 2D segmentation labels and anatomical landmark locations are established using projected 3D annotations. Intraoperative registration couples a traditional intensity-based strategy with annotations inferred by the network and requires no human assistance. RESULTS Ground truth segmentation labels and anatomical landmarks were obtained in 366 fluoroscopic images across 6 cadaveric specimens. In a leave-one-subject-out experiment, networks trained on these data obtained mean Dice coefficients of 0.86, 0.87, 0.90, and 0.84 for the left and right hemipelves and the left and right femurs, respectively. The mean 2D landmark localization error was 5.0 mm. The pelvis was registered within [Formula: see text] for 86% of the images when using the proposed intraoperative approach, with an average runtime of 7 s. In comparison, an intensity-only approach without manual initialization registered the pelvis to [Formula: see text] in 18% of images. CONCLUSIONS We have created the first accurately annotated, non-synthetic dataset of hip fluoroscopy. By using these annotations as training data for neural networks, state-of-the-art performance in fluoroscopic segmentation and landmark localization was achieved. Integrating these annotations allows for a robust, fully automatic, and efficient intraoperative registration during fluoroscopic navigation of the hip.
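The Dice coefficients reported in the RESULTS section measure overlap between predicted and ground-truth segmentation masks. A minimal sketch on synthetic masks (the mask shapes are invented):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))

gt = np.zeros((64, 64), dtype=bool)
gt[16:48, 16:48] = True            # toy "ground truth" bone mask
pred = np.zeros((64, 64), dtype=bool)
pred[20:48, 16:48] = True          # slightly shrunken prediction
score = dice(gt, pred)             # ~0.933 for this example
```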
Affiliation(s)
- Robert B Grupp
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Rachel A Hegeman
- Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Clayton P Alexander
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD, USA
- Yoshito Otake
- Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma, Nara, Japan
- Benjamin A McArthur
- Department of Surgery and Perioperative Care, Dell Medical School, University of Texas, Austin, TX, USA
- Texas Orthopedics, Austin, TX, USA
- Mehran Armand
- Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Russell H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
8
Ma Q, Kobayashi E, Fan B, Nakagawa K, Sakuma I, Masamune K, Suenaga H. Automatic 3D landmarking model using patch-based deep neural networks for CT image of oral and maxillofacial surgery. Int J Med Robot 2020; 16:e2093. [PMID: 32065718] [DOI: 10.1002/rcs.2093]
Abstract
BACKGROUND Manual landmarking is time-consuming and requires a high level of expertise. Although some algorithm-based landmarking methods have been proposed, they lack flexibility and may be susceptible to data diversity. METHODS The CT images from 66 patients who underwent oral and maxillofacial surgery (OMS) were landmarked manually in MIMICS. The CT slices were then exported as images for recreating the 3D volume. The coordinate data of the landmarks were further processed in MATLAB using a principal component analysis (PCA) method. A patch-based deep neural network model with a three-layer convolutional neural network (CNN) was trained to obtain landmarks from CT images. RESULTS The evaluation experiment showed that this CNN model could automatically complete landmarking in an average processing time of 37.871 seconds with an average error of 5.785 mm. CONCLUSION This study shows a promising potential to relieve the workload of the surgeon and reduce the dependence on human experience for OMS landmarking.
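The PCA preprocessing of landmark coordinates mentioned in METHODS can be sketched with an SVD-based PCA over flattened landmark vectors. The data and dimensions below are random stand-ins, not the study's:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centered data matrix X (n_samples, n_features)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]            # principal axes (rows)
    scores = Xc @ components.T                # low-dimensional coordinates
    return mean, components, scores

# Toy stand-in for flattened (x, y, z) landmark coordinates of 66 scans:
# e.g. 10 hypothetical landmarks x 3 coordinates = 30 features.
rng = np.random.default_rng(42)
X = rng.normal(size=(66, 30))
mean, comps, scores = pca(X, n_components=5)
recon = mean + scores @ comps                 # rank-5 reconstruction
```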
Affiliation(s)
- Qingchuan Ma
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Etsuko Kobayashi
- Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Bowen Fan
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Keiichi Nakagawa
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Ken Masamune
- Institute of Advanced BioMedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Hideyuki Suenaga
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan