1. Han Z, Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput Assist Surg (Abingdon) 2024;29:2357164. PMID: 39253945. DOI: 10.1080/24699322.2024.2357164
Abstract
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved through superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling of intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the existence of various methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive and technical-oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery, and discuss the potential topics for future advancements.
Affiliation(s)
- Zheng Han: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
2. You X, Gu Y, Liu Y, Lu S, Tang X, Yang J. VerteFormer: A single-staged Transformer network for vertebrae segmentation from CT images with arbitrary field of views. Med Phys 2023;50:6296-6318. PMID: 37211910. DOI: 10.1002/mp.16467
Abstract
BACKGROUND Spinal diseases burden an increasing number of patients, and fully automatic vertebrae segmentation for CT images with arbitrary fields of view (FOVs) has become a fundamental task for computer-assisted spinal disease diagnosis and surgical intervention. Researchers have therefore worked on this challenging task for several years. PURPOSE The task suffers from challenges including intra-vertebra segmentation inconsistency and poor identification of the biterminal vertebrae in CT scans. Existing models also have limitations: they may be difficult to apply to spinal cases with arbitrary FOVs, or they employ multi-stage networks with excessive computational cost. In this paper, we propose a single-staged model called VerteFormer that effectively deals with the challenges and limitations mentioned above. METHODS The proposed VerteFormer exploits the Vision Transformer (ViT), which excels at mining global relations in the input data. The combined Transformer and UNet-based structure effectively fuses global and local features of vertebrae. Besides, we propose an Edge Detection (ED) block based on convolution and self-attention to divide neighboring vertebrae with clear boundary lines, which simultaneously encourages the network to produce more consistent segmentation masks. To better identify vertebral labels, particularly for the biterminal vertebrae, we further introduce global information generated by a Global Information Extraction (GIE) block. RESULTS We evaluate the proposed model on two public datasets, MICCAI Challenge VerSe 2019 and 2020. VerteFormer achieves Dice scores of 86.39% and 86.54% on the public and hidden test sets of VerSe 2019, and 84.53% and 86.86% on VerSe 2020, outperforming other Transformer-based models and single-staged methods specifically designed for the VerSe Challenge. Additional ablation experiments validate the effectiveness of the ViT block, ED block and GIE block. CONCLUSIONS We propose a single-staged Transformer-based model for fully automatic vertebrae segmentation from CT images with arbitrary FOVs. ViT demonstrates its effectiveness in modeling long-range relations, and the ED and GIE blocks improve the segmentation performance. The proposed model can assist physicians in the diagnosis of spinal diseases and in surgical intervention, and is also promising for generalization and transfer to other medical imaging applications.
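As a rough illustration of the Dice score used in these results (a generic metric sketch, not the authors' implementation), the following NumPy snippet computes it for two binary masks:
```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 3D masks.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:4, 1:3, 1:3] = True
print(round(dice_score(a, b), 3))  # 0.8
```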
Affiliation(s)
- Xin You: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Yun Gu: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Yingying Liu: Research, Technology and Clinical, Medtronic Technology Center, Shanghai, China
- Steve Lu: Visualization and Robotics, Medtronic Technology Center, Shanghai, China
- Xin Tang: Research, Technology and Clinical, Medtronic Technology Center, Shanghai, China
- Jie Yang: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
3. Lee Y, Lee E, Jang IT. ALiGN: Attention based Line Guided Network for Vertebral Compression Fracture Detection. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. PMID: 38083125. DOI: 10.1109/embc40787.2023.10340261
Abstract
Vertebral compression fracture (VCF) is one of the most common fractures, especially in the elderly. Because it causes postural deformation that may lead to secondary disorders in the respiratory or digestive system if not treated in time, timely diagnosis of VCF is crucial. Deep learning-based detection can reduce both the workload of healthcare workers and the rate of misdiagnosis. Hence, in this work we propose ALiGN, a compression fracture detection model for the lumbar vertebrae based on a deep convolutional neural network (CNN). Specifically, we take the location of each vertebral body into account via a feature pyramid network with an attention mechanism. Our proposed model outperforms earlier works with a sensitivity of 0.9729, a specificity of 0.9914, and an mAP of 0.7882.
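The reported sensitivity and specificity follow directly from the detector's confusion counts; a minimal illustrative sketch (the counts below are hypothetical, not the ALiGN evaluation code):
```python
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity (recall on fractured vertebrae) and specificity (recall on normal ones)."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical counts from a fracture-detection run.
print(sensitivity_specificity(tp=180, fp=4, tn=460, fn=5))
```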
4. CT-Based Automatic Spine Segmentation Using Patch-Based Deep Learning. Int J Intell Syst 2023. DOI: 10.1155/2023/2345835
Abstract
CT vertebral segmentation plays an essential role in various clinical applications, such as computer-assisted surgical interventions, assessment of spinal abnormalities, and detection of vertebral compression fractures. Automatic CT vertebral segmentation is challenging due to the overlapping shadows of thoracoabdominal structures such as the lungs, bony structures such as the ribs, and other issues such as ambiguous object borders, complicated spine architecture, patient variability, and fluctuations in image contrast. Deep learning is an emerging technique for disease diagnosis in the medical field. This study proposes a patch-based deep learning approach to extract discriminative features from unlabeled data using a stacked sparse autoencoder (SSAE). 2D slices from a CT volume are divided into overlapping patches that are fed into the model for training. A random under-sampling (RUS) module is applied to balance the training data by selecting a subset of the majority class. The SSAE uses pixel intensities alone to learn high-level features that capture the distinctive appearance of image patches. Each image is then processed with a sliding-window operation to express its patches through the autoencoder's high-level features, which are fed into a sigmoid layer to classify whether each patch is a vertebra or not. We validate our approach on three diverse publicly available datasets: VerSe, CSI-Seg, and the Lumbar CT dataset. Our proposed method outperformed other models after configuration optimization, achieving 89.9% precision, 90.2% recall, 98.9% accuracy, 90.4% F-score, 82.6% intersection over union (IoU), and 90.2% Dice coefficient (DC). These results demonstrate that the model performs consistently under a variety of validation strategies and is flexible, fast, and generalizable, making it well suited for clinical application.
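The sliding-window patch extraction described above can be sketched as follows; the patch size and stride are illustrative assumptions, not the values used in the study:
```python
import numpy as np

def extract_patches(slice_2d: np.ndarray, patch: int = 32, stride: int = 16) -> np.ndarray:
    """Collect overlapping square patches from a single CT slice."""
    h, w = slice_2d.shape
    patches = [
        slice_2d[y:y + patch, x:x + patch]
        for y in range(0, h - patch + 1, stride)
        for x in range(0, w - patch + 1, stride)
    ]
    return np.stack(patches)

ct_slice = np.random.rand(512, 512).astype(np.float32)
print(extract_patches(ct_slice).shape)  # (961, 32, 32) for a 512x512 input
```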
5. SVseg: Stacked Sparse Autoencoder-Based Patch Classification Modeling for Vertebrae Segmentation. Mathematics 2022. DOI: 10.3390/math10050796
Abstract
Precise vertebrae segmentation is essential for the image-related analysis of spine pathologies such as vertebral compression fractures and other abnormalities, as well as for clinical diagnostic treatment and surgical planning. An automatic and objective system for vertebra segmentation is required, but its development is likely to run into difficulties such as low segmentation accuracy and the requirement of prior knowledge or human intervention. Recently, vertebral segmentation methods have focused on deep learning-based techniques. To mitigate the challenges involved, we propose deep learning primitives and stacked Sparse autoencoder-based patch classification modeling for Vertebrae segmentation (SVseg) from Computed Tomography (CT) images. After data preprocessing, we extract overlapping patches from CT images as input to train the model. The stacked sparse autoencoder learns high-level features from unlabeled image patches in an unsupervised way. Furthermore, we employ supervised learning to refine the feature representation to improve the discriminability of learned features. These high-level features are fed into a logistic regression classifier to fine-tune the model. A sigmoid classifier is added to the network to discriminate the vertebrae patches from non-vertebrae patches by selecting the class with the highest probabilities. We validated our proposed SVseg model on the publicly available MICCAI Computational Spine Imaging (CSI) dataset. After configuration optimization, our proposed SVseg model achieved impressive performance, with 87.39% in Dice Similarity Coefficient (DSC), 77.60% in Jaccard Similarity Coefficient (JSC), 91.53% in precision (PRE), and 90.88% in sensitivity (SEN). The experimental results demonstrated the method’s efficiency and significant potential for diagnosing and treating clinical spinal diseases.
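A stacked sparse autoencoder of the kind described can be approximated in PyTorch with greedy layer-wise pretraining and an L1 activity penalty standing in for the sparsity constraint; the layer sizes below are assumptions, not the SVseg configuration:
```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One autoencoder layer; trained with reconstruction loss plus an L1 sparsity penalty on the code."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def pretrain_layer(ae, data, sparsity_weight=1e-3, epochs=5):
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, code = ae(data)
        loss = nn.functional.mse_loss(recon, data) + sparsity_weight * code.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return ae

# Greedy layer-wise pretraining on flattened 32x32 patches (toy data).
patches = torch.rand(256, 32 * 32)
ae1 = pretrain_layer(SparseAutoencoder(1024, 400), patches)
codes1 = ae1.encoder(patches).detach()
ae2 = pretrain_layer(SparseAutoencoder(400, 100), codes1)
# A sigmoid classifier on top of ae2's codes would then be fine-tuned with patch labels.
```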
6. Gong H, Liu J, Chen B, Li S. ResAttenGAN: Simultaneous segmentation of multiple spinal structures on axial lumbar MRI image using residual attention and adversarial learning. Artif Intell Med 2022;124:102243. DOI: 10.1016/j.artmed.2022.102243
7. Tao R, Liu W, Zheng G. Spine-transformers: Vertebra labeling and segmentation in arbitrary field-of-view spine CTs via 3D transformers. Med Image Anal 2021;75:102258. PMID: 34670147. DOI: 10.1016/j.media.2021.102258
Abstract
In this paper, we address the problem of fully automatic labeling and segmentation of 3D vertebrae in arbitrary Field-Of-View (FOV) CT images. We propose a deep learning-based two-stage solution to tackle these two problems. More specifically, in the first stage, the challenging vertebra labeling problem is solved via a novel transformer-based 3D object detector that views automatic detection of vertebrae in arbitrary FOV CT scans as a one-to-one set prediction problem. The main components of the new method, called Spine-Transformers, are a one-to-one set-based global loss that forces unique predictions and a lightweight 3D transformer architecture equipped with a skip connection and learnable positional embeddings for the encoder and decoder, respectively. We additionally propose an inscribed sphere-based object detector to replace the regular box-based object detector and better handle volume orientation variations. Our method reasons about the relationships of different levels of vertebrae and the global volume context to directly infer all vertebrae in parallel. In the second stage, the segmentation of the identified vertebrae and the refinement of the detected centers are performed by training a single multi-task encoder-decoder network for all vertebrae, as the network does not need to identify which vertebra it is working on. The two tasks share a common encoder path but have different decoder paths. Comprehensive experiments were conducted on two public datasets and one in-house dataset, and the experimental results demonstrate the efficacy of the proposed approach.
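The one-to-one set prediction idea requires a bipartite matching between predicted and ground-truth vertebra centres; a simplified sketch using SciPy's Hungarian solver (the cost is reduced to a plain Euclidean distance here, which is an assumption):
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(pred_centers: np.ndarray, gt_centers: np.ndarray):
    """One-to-one assignment of predicted to ground-truth centers by Euclidean cost."""
    cost = np.linalg.norm(pred_centers[:, None, :] - gt_centers[None, :, :], axis=-1)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return list(zip(pred_idx, gt_idx))

pred = np.array([[10.0, 5.0, 3.0], [40.0, 6.0, 2.0], [70.0, 4.0, 1.0]])
gt = np.array([[69.0, 4.0, 1.0], [11.0, 5.0, 3.0], [41.0, 6.0, 2.0]])
print(match_predictions(pred, gt))  # [(0, 1), (1, 2), (2, 0)]
```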
Affiliation(s)
- Rong Tao: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No.800 Dongchuan Road, Shanghai 200240, China
- Wenyong Liu: Key Laboratory of Biomechanics and Mechanobiology (Beihang University) of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Guoyan Zheng: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No.800 Dongchuan Road, Shanghai 200240, China
8. Sekuboyina A, Husseini ME, Bayat A, Löffler M, Liebl H, Li H, Tetteh G, Kukačka J, Payer C, Štern D, Urschler M, Chen M, Cheng D, Lessmann N, Hu Y, Wang T, Yang D, Xu D, Ambellan F, Amiranashvili T, Ehlke M, Lamecker H, Lehnert S, Lirio M, Olaguer NPD, Ramm H, Sahu M, Tack A, Zachow S, Jiang T, Ma X, Angerman C, Wang X, Brown K, Kirszenberg A, Puybareau É, Chen D, Bai Y, Rapazzo BH, Yeah T, Zhang A, Xu S, Hou F, He Z, Zeng C, Xiangshang Z, Liming X, Netherton TJ, Mumme RP, Court LE, Huang Z, He C, Wang LW, Ling SH, Huỳnh LD, Boutry N, Jakubicek R, Chmelik J, Mulay S, Sivaprakasam M, Paetzold JC, Shit S, Ezhov I, Wiestler B, Glocker B, Valentinitsch A, Rempfler M, Menze BH, Kirschke JS. VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. Med Image Anal 2021;73:102166. PMID: 34340104. DOI: 10.1016/j.media.2021.102166
Abstract
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
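Benchmarks of this kind report Dice per vertebra over multi-label masks; a rough sketch of such a per-label evaluation (the label convention is an assumption, not the official VerSe evaluation script):
```python
import numpy as np

def per_vertebra_dice(pred_labels: np.ndarray, gt_labels: np.ndarray) -> dict:
    """Dice per vertebra label; labels are positive integers, 0 is background."""
    scores = {}
    for label in np.unique(gt_labels):
        if label == 0:
            continue
        p, g = pred_labels == label, gt_labels == label
        denom = p.sum() + g.sum()
        scores[int(label)] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

gt = np.zeros((8, 8, 8), dtype=np.int32); gt[2:4] = 20; gt[5:7] = 21   # e.g. two lumbar labels
pred = np.roll(gt, 1, axis=0)                                          # shifted prediction
print(per_vertebra_dice(pred, gt))
```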
Affiliation(s)
- Anjany Sekuboyina: Department of Informatics, Technical University of Munich, Germany; Munich School of BioEngineering, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Malek E Husseini: Department of Informatics, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Amirhossein Bayat: Department of Informatics, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Hans Liebl: Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Hongwei Li: Department of Informatics, Technical University of Munich, Germany
- Giles Tetteh: Department of Informatics, Technical University of Munich, Germany
- Jan Kukačka: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Germany
- Christian Payer: Institute of Computer Graphics and Vision, Graz University of Technology, Austria
- Darko Štern: Gottfried Schatz Research Center: Biophysics, Medical University of Graz, Austria
- Martin Urschler: School of Computer Science, The University of Auckland, New Zealand
- Maodong Chen: Computer Vision Group, iFLYTEK Research South China, China
- Dalong Cheng: Computer Vision Group, iFLYTEK Research South China, China
- Nikolas Lessmann: Department of Radiology and Nuclear Medicine, Radboud University Medical Center Nijmegen, The Netherlands
- Yujin Hu: Shenzhen Research Institute of Big Data, China
- Tianfu Wang: School of Biomedical Engineering, Health Science Center, Shenzhen University, China
- Xin Wang: Department of Electronic Engineering, Fudan University, China; Department of Radiology, University of North Carolina at Chapel Hill, USA
- Feng Hou: Institute of Computing Technology, Chinese Academy of Sciences, China
- Zheng Xiangshang: College of Computer Science and Technology, Zhejiang University, China; Real Doctor AI Research Centre, Zhejiang University, China
- Xu Liming: College of Computer Science and Technology, Zhejiang University, China
- Zixun Huang: Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, China
- Chenhang He: Department of Computing, The Hong Kong Polytechnic University, China
- Li-Wen Wang: Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, China
- Sai Ho Ling: The School of Biomedical Engineering, University of Technology Sydney, Australia
- Lê Duy Huỳnh: EPITA Research and Development Laboratory (LRDE), France
- Nicolas Boutry: EPITA Research and Development Laboratory (LRDE), France
- Roman Jakubicek: Department of Biomedical Engineering, Brno University of Technology, Czech Republic
- Jiri Chmelik: Department of Biomedical Engineering, Brno University of Technology, Czech Republic
- Supriti Mulay: Indian Institute of Technology Madras, India; Healthcare Technology Innovation Centre, India
- Suprosanna Shit: Department of Informatics, Technical University of Munich, Germany
- Ivan Ezhov: Department of Informatics, Technical University of Munich, Germany
- Ben Glocker: Department of Computing, Imperial College London, UK
- Markus Rempfler: Friedrich Miescher Institute for Biomedical Engineering, Switzerland
- Björn H Menze: Department of Informatics, Technical University of Munich, Germany; Department for Quantitative Biomedicine, University of Zurich, Switzerland
- Jan S Kirschke: Department of Neuroradiology, Klinikum Rechts der Isar, Germany
9. Gong H, Liu J, Li S, Chen B. Axial-SpineGAN: simultaneous segmentation and diagnosis of multiple spinal structures on axial magnetic resonance imaging images. Phys Med Biol 2021;66. PMID: 33887718. DOI: 10.1088/1361-6560/abfad9
Abstract
Providing a simultaneous segmentation and diagnosis of the spinal structures on axial magnetic resonance imaging (MRI) images has significant value for subsequent pathological analyses and clinical treatments. However, this task remains challenging, owing to the significant structural diversity, subtle differences between normal and abnormal structures, implicit borders, and insufficient training data. In this study, we propose an innovative network framework called 'Axial-SpineGAN', comprising a generator, discriminator, and diagnostor, aiming to address the above challenges and to achieve simultaneous segmentation and disease diagnosis for discs, neural foramens, thecal sacs, and posterior arches on axial MRI images. The generator employs an enhancing feature fusion module to generate discriminative features, i.e. to address the challenges regarding the significant structural diversity and subtle differences between normal and abnormal structures. An enhancing border alignment module is employed to obtain an accurate pixel classification of the implicit borders. The discriminator employs an adversarial learning module to effectively strengthen the higher-order spatial consistency and to avoid overfitting owing to insufficient training data. The diagnostor employs an automated diagnosis module to provide automated recognition of spinal diseases. Extensive experiments demonstrate that these modules have positive effects on improving the segmentation and diagnosis accuracies. Additionally, the results indicate that Axial-SpineGAN achieves the highest Dice similarity coefficient (94.9% ± 1.8%) for segmentation accuracy and the highest accuracy rate (93.9% ± 2.6%) for diagnosis accuracy, thereby outperforming existing state-of-the-art methods. Therefore, our proposed Axial-SpineGAN is effective and has potential as a clinical tool for providing automated segmentation and disease diagnosis of multiple spinal structures on MRI images.
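The adversarial learning module amounts to adding a discriminator loss on top of the usual segmentation loss; a condensed PyTorch sketch of that idea (the toy networks and loss weights are placeholders, not Axial-SpineGAN):
```python
import torch
import torch.nn as nn

# Placeholder networks: any segmentation generator and mask discriminator would do.
generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 4, 1))                    # 4 spinal-structure classes
discriminator = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(8, 1))

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

image = torch.rand(2, 1, 64, 64)
gt_mask = torch.randint(0, 4, (2, 64, 64))
gt_onehot = nn.functional.one_hot(gt_mask, 4).permute(0, 3, 1, 2).float()

# Generator objective: segmentation loss plus an adversarial term encouraging realistic masks.
pred = generator(image)
g_loss = ce(pred, gt_mask) + 0.01 * bce(discriminator(pred.softmax(1)), torch.ones(2, 1))
# Discriminator objective: real (ground-truth) masks vs. generated masks.
d_loss = bce(discriminator(gt_onehot), torch.ones(2, 1)) + \
         bce(discriminator(pred.softmax(1).detach()), torch.zeros(2, 1))
```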
Affiliation(s)
- Hao Gong: Beijing Institute of Technology, School of Mechanical Engineering, 5 South Zhongguancun Street, Haidian District, Beijing, 100081, People's Republic of China
- Jianhua Liu: Beijing Institute of Technology, School of Mechanical Engineering, 5 South Zhongguancun Street, Haidian District, Beijing, 100081, People's Republic of China
- Shuo Li: University of Western, Department of Medical Imaging and Medical Biophysics, London, ON, N6A 5W9, Canada
- Bo Chen: Western University, School of Health Science, London, ON, N6A 4V2, Canada
10. Kim KC, Cho HC, Jang TJ, Choi JM, Seo JK. Automatic detection and segmentation of lumbar vertebrae from X-ray images for compression fracture evaluation. Comput Methods Programs Biomed 2021;200:105833. PMID: 33250283. DOI: 10.1016/j.cmpb.2020.105833
Abstract
For compression fracture detection and evaluation, an automatic X-ray image segmentation technique that combines deep-learning and level-set methods is proposed. Automatic segmentation is much more difficult for X-ray images than for CT or MRI images because they contain overlapping shadows of thoracoabdominal structures including lungs, bowel gases, and other bony structures such as ribs. Additional difficulties include unclear object boundaries, the complex shape of the vertebra, inter-patient variability, and variations in image contrast. Accordingly, a structured hierarchical segmentation method is presented that combines the advantages of two deep-learning methods. Pose-driven learning is used to selectively identify the five lumbar vertebrae in an accurate and robust manner. With knowledge of the vertebral positions, M-net is employed to segment the individual vertebra. Finally, fine-tuning segmentation is applied by combining the level-set method with the previously obtained segmentation results. The performance of the proposed method was validated by 160 lumbar X-ray images, resulting in a mean Dice similarity metric of 91.60±2.22%. The results show that the proposed method achieves accurate and robust identification of each lumbar vertebra and fine segmentation of individual vertebra.
Affiliation(s)
- Kang Cheol Kim: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Hyun Cheol Cho: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Tae Jun Jang: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Jin Keun Seo: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
11. Eltes PE, Kiss L, Bartos M, Gyorgy ZM, Csakany T, Bereczki F, Lesko V, Puhl M, Varga PP, Lazary A. Geometrical accuracy evaluation of an affordable 3D printing technology for spine physical models. J Clin Neurosci 2020;72:438-446. DOI: 10.1016/j.jocn.2019.12.027
12. Zhou W, Lin L, Ge G. N-Net: 3D Fully Convolution Network-Based Vertebrae Segmentation from CT Spinal Images. Int J Pattern Recogn 2019. DOI: 10.1142/s0218001419570039
Abstract
Accurate vertebrae segmentation from CT spinal images is crucial for the clinical tasks of diagnosis, surgical planning, and post-operative assessment. This paper describes an N-shaped 3D fully convolution network (FCN) for vertebrae segmentation: N-net. In this network, a global structure guidance pathway is designed for fusing the high-level semantic features with the global structure information. Moreover, the residual structure and the skip connection are introduced into the traditional 3D FCN framework. These schemes can significantly improve the accuracy of vertebrae segmentation. Experimental results demonstrate the effectiveness and robustness of our method. A high average Dice score of 0.9499 ± 0.02 can be obtained, which is better than those of existing methods.
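The residual structure with skip connections mentioned above can be sketched as a 3D convolutional block in PyTorch (channel counts are arbitrary assumptions):
```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Two 3D convolutions with an identity skip connection, as used in residual FCNs."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)   # skip connection: add the input back

block = ResidualBlock3D(16)
print(block(torch.rand(1, 16, 32, 32, 32)).shape)  # torch.Size([1, 16, 32, 32, 32])
```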
Affiliation(s)
- Wenhui Zhou: School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, P. R. China
- Lili Lin: School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou, P. R. China
- Guangtao Ge: School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou, P. R. China
13. Lessmann N, van Ginneken B, de Jong PA, Išgum I. Iterative fully convolutional neural networks for automatic vertebra segmentation and identification. Med Image Anal 2019;53:142-155. PMID: 30771712. DOI: 10.1016/j.media.2019.02.005
Abstract
Precise segmentation and anatomical identification of the vertebrae provide the basis for automatic analysis of the spine, such as detection of vertebral compression fractures or other abnormalities. Most dedicated spine CT and MR scans, as well as scans of the chest, abdomen or neck, cover only part of the spine. Segmentation and identification should therefore not rely on the visibility of certain vertebrae or a certain number of vertebrae. We propose an iterative instance segmentation approach that uses a fully convolutional neural network to segment and label vertebrae one after the other, independently of the number of visible vertebrae. This instance-by-instance segmentation is enabled by combining the network with a memory component that retains information about already segmented vertebrae. The network iteratively analyzes image patches, using information from both image and memory to search for the next vertebra. To efficiently traverse the image, we include the prior knowledge that the vertebrae are always located next to each other, which is used to follow the vertebral column. The network concurrently performs multiple tasks: segmentation of a vertebra, regression of its anatomical label, and prediction of whether the vertebra is completely visible in the image, which allows incompletely visible vertebrae to be excluded from further analyses. The predicted anatomical labels of the individual vertebrae are additionally refined with a maximum likelihood approach, choosing the overall most likely labeling when all detected vertebrae are taken into account. This method was evaluated with five diverse datasets, including multiple modalities (CT and MR), various fields of view and coverages of different sections of the spine, and a particularly challenging set of low-dose chest CT scans. For vertebra segmentation, the average Dice score was 94.9 ± 2.1% with an average absolute symmetric surface distance of 0.2 ± 10.1 mm. The anatomical identification had an accuracy of 93%, corresponding to a single case with mislabeled vertebrae. Vertebrae were classified as completely or incompletely visible with an accuracy of 97%. The proposed iterative segmentation method compares favorably with state-of-the-art methods and is fast, flexible and generalizable.
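The instance-by-instance strategy can be pictured as a loop that keeps a memory of already-segmented vertebrae and feeds it back while following the vertebral column; the sketch below uses a dummy stand-in for the trained network and only conveys the control flow:
```python
import numpy as np

def segment_one_vertebra(image_patch, memory_patch):
    """Stand-in for the trained FCN: returns a binary mask of the next unsegmented vertebra."""
    return (image_patch > 0.5) & ~memory_patch          # toy rule, not a real model

def iterative_segmentation(volume, patch_center, patch_size=32, max_instances=25):
    memory = np.zeros_like(volume, dtype=bool)           # voxels of already-segmented vertebrae
    instances = []
    for _ in range(max_instances):
        sl = tuple(slice(max(c - patch_size // 2, 0), c + patch_size // 2)
                   for c in patch_center)
        mask = segment_one_vertebra(volume[sl], memory[sl])
        if mask.sum() == 0:                               # no further vertebra found
            break
        memory[sl] |= mask
        instances.append((sl, mask))
        patch_center = (patch_center[0] + patch_size // 2,  # step along the column
                        patch_center[1], patch_center[2])
    return instances

vol = np.random.rand(128, 64, 64)
print(len(iterative_segmentation(vol, (16, 32, 32))))
```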
Affiliation(s)
- Nikolas Lessmann: Image Sciences Institute, University Medical Center Utrecht, Room Q.02.4.45, 3508 GA Utrecht, P.O. Box 85500, The Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Radboud University Medical Center Nijmegen, The Netherlands
- Pim A de Jong: Department of Radiology, University Medical Center Utrecht, The Netherlands; Utrecht University, The Netherlands
- Ivana Išgum: Image Sciences Institute, University Medical Center Utrecht, Room Q.02.4.45, 3508 GA Utrecht, P.O. Box 85500, The Netherlands
14. Furqan Qadri S, Ai D, Hu G, Ahmad M, Huang Y, Wang Y, Yang J. Automatic Deep Feature Learning via Patch-Based Deep Belief Network for Vertebrae Segmentation in CT Images. Appl Sci 2018;9:69. DOI: 10.3390/app9010069
Abstract
Precise automatic vertebra segmentation in computed tomography (CT) images is important for the quantitative analysis of vertebrae-related diseases but remains a challenging task due to the high variation in spinal anatomy among patients. In this paper, we propose a deep learning approach for automatic CT vertebra segmentation named patch-based deep belief networks (PaDBNs). Our proposed PaDBN model automatically selects features from image patches and then measures the differences between classes to assess performance. The region of interest (ROI) is obtained from the CT images. An unsupervised feature-reduction contrastive divergence algorithm is applied for weight initialization, and the weights are then optimized layer by layer in a supervised fine-tuning procedure. The discriminative learned features obtained from the steps above are used as input to a classifier to obtain the likelihood of the vertebrae. Experimental results demonstrate that the proposed PaDBN model can considerably reduce computational cost and produces excellent vertebra segmentation performance in terms of accuracy compared with state-of-the-art methods.
Affiliation(s)
- Syed Furqan Qadri: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Guoyu Hu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Mubashir Ahmad: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
15. Belharbi S, Chatelain C, Hérault R, Adam S, Thureau S, Chastan M, Modzelewski R. Spotting L3 slice in CT scans using deep convolutional network and transfer learning. Comput Biol Med 2017;87:95-103. PMID: 28558319. DOI: 10.1016/j.compbiomed.2017.05.018
Abstract
In this article, we present a fully automated system for spotting a particular slice in a complete 3D Computed Tomography exam (CT scan). Our approach does not require any assumptions about which part of the patient's body is covered by the scan. It relies on an original machine learning regression approach. Our models are learned with a transfer learning trick, exploiting deep architectures pre-trained on the ImageNet database, and therefore require very little annotation for training. The whole pipeline consists of three steps: i) conversion of the CT scans into Maximum Intensity Projection (MIP) images, ii) prediction from a Convolutional Neural Network (CNN) applied in a sliding-window fashion over the MIP image, and iii) robust analysis of the prediction sequence to predict the height of the desired slice within the whole CT scan. Our approach is applied to the detection of the third lumbar vertebra (L3) slice, which has been found to be representative of whole-body composition. Our system is evaluated on a database collected in our clinical center, containing 642 CT scans from different patients. We obtained an average localization error of 1.91 ± 2.69 slices (less than 5 mm) in an average time of less than 2.5 s per CT scan, allowing integration of the proposed system into daily clinical routines.
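The first step, maximum intensity projection, simply collapses the CT volume along one axis before the CNN scores it; a minimal sketch (the projection axis is an assumption):
```python
import numpy as np

def frontal_mip(ct_volume: np.ndarray) -> np.ndarray:
    """Maximum intensity projection of a CT volume (z, y, x) along the antero-posterior axis."""
    return ct_volume.max(axis=1)

volume = np.random.randint(-1000, 1500, size=(300, 512, 512)).astype(np.int16)
mip = frontal_mip(volume)
print(mip.shape)  # (300, 512): one row per axial slice, ready for sliding-window scoring
```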
Affiliation(s)
- Soufiane Belharbi: Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000, Rouen, France
- Clément Chatelain: Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000, Rouen, France
- Romain Hérault: Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000, Rouen, France
- Sébastien Adam: Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000, Rouen, France
- Sébastien Thureau: Henri Becquerel Center, Department of Radiotherapy, 76000, Rouen, France; Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000, Rouen, France
- Mathieu Chastan: Henri Becquerel Center, Department of Nuclear Medicine, 76000, Rouen, France
- Romain Modzelewski: Normandie Univ, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000, Rouen, France; Henri Becquerel Center, Department of Nuclear Medicine, 76000, Rouen, France
16. Measurement and Geometric Modelling of Human Spine Posture for Medical Rehabilitation Purposes Using a Wearable Monitoring System Based on Inertial Sensors. Sensors 2016;17:3. PMID: 28025480. PMCID: PMC5298576. DOI: 10.3390/s17010003
Abstract
This paper presents a mathematical model that can be used to virtually reconstruct the posture of the human spine. Using orientation angles from a wearable monitoring system based on inertial sensors, the model calculates and represents the curvature of the spine. Several hypotheses are taken into consideration to increase the model's precision. An estimation of the postures that can be calculated is also presented. A non-invasive solution to identify the shape of the human back can help reduce the time needed for medical rehabilitation sessions. Moreover, it prevents future problems caused by poor posture.
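Reconstructing the curve from sensor orientation angles amounts to chaining segments rotated by the measured angles; a simplified sagittal-plane sketch under that assumption (segment length and angles are illustrative):
```python
import numpy as np

def reconstruct_sagittal_curve(pitch_deg, segment_length=40.0):
    """Chain spine segments using their pitch angles to get sagittal-plane vertex positions (mm)."""
    points = [np.zeros(2)]
    for angle in np.radians(pitch_deg):
        step = segment_length * np.array([np.sin(angle), np.cos(angle)])  # (horizontal, vertical)
        points.append(points[-1] + step)
    return np.stack(points)

# Pitch angles (degrees) reported by inertial sensors along the back, pelvis to neck.
print(reconstruct_sagittal_curve([10, 5, 0, -5, -12]).round(1))
```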
17. Kadoury S, Labelle H, Parent S. Postoperative 3D spine reconstruction by navigating partitioning manifolds. Med Phys 2016;43:1045-56. PMID: 26936692. DOI: 10.1118/1.4940792
Abstract
PURPOSE The postoperative evaluation of scoliosis patients undergoing corrective treatment is an important task to assess the strategy of the spinal surgery. Using accurate 3D geometric models of the patient's spine is essential to measure longitudinal changes in the patient's anatomy. On the other hand, reconstructing the spine in 3D from postoperative radiographs is a challenging problem due to the presence of instrumentation (metallic rods and screws) occluding vertebrae on the spine. METHODS This paper describes the reconstruction problem by searching for the optimal model within a manifold space of articulated spines learned from a training dataset of pathological cases who underwent surgery. The manifold structure is implemented based on a multilevel manifold ensemble to structure the data, incorporating connections between nodes within a single manifold, in addition to connections between different multilevel manifolds, representing subregions with similar characteristics. RESULTS The reconstruction pipeline was evaluated on x-ray datasets from both preoperative patients and patients with spinal surgery. By comparing the method to ground-truth models, a 3D reconstruction accuracy of 2.24 ± 0.90 mm was obtained from 30 postoperative scoliotic patients, while handling patients with highly deformed spines. CONCLUSIONS This paper illustrates how this manifold model can accurately identify similar spine models by navigating in the low-dimensional space, as well as computing nonlinear charts within local neighborhoods of the embedded space during the testing phase. This technique allows postoperative follow-ups of spinal surgery using personalized 3D spine models and assess surgical strategies for spinal deformities.
Affiliation(s)
- Samuel Kadoury: Department of Computer and Software Engineering, Ecole Polytechnique Montreal, Montréal, Québec H3C 3A7, Canada
- Hubert Labelle: CHU Sainte-Justine Hospital Research Center, Montréal, Québec H3T 1C5, Canada
- Stefan Parent: CHU Sainte-Justine Hospital Research Center, Montréal, Québec H3T 1C5, Canada
18. Paragios N, Ferrante E, Glocker B, Komodakis N, Parisot S, Zacharaki EI. (Hyper)-graphical models in biomedical image analysis. Med Image Anal 2016;33:102-106. DOI: 10.1016/j.media.2016.06.028
19. Liao S, Zhan Y, Dong Z, Yan R, Gong L, Zhou XS, Salganicoff M, Fei J. Automatic Lumbar Spondylolisthesis Measurement in CT Images. IEEE Trans Med Imaging 2016;35:1658-1669. PMID: 26849859. DOI: 10.1109/tmi.2016.2523452
Abstract
Lumbar spondylolisthesis is one of the most common spinal diseases. It is caused by the anterior shift of a lumbar vertebra relative to the subjacent vertebra. In current clinical practice, staging of spondylolisthesis is often conducted in a qualitative way. Although Meyerding grading opens the door to staging spondylolisthesis in a more quantitative way, it relies on manual measurement, which is time consuming and irreproducible. Thus, an automatic measurement algorithm becomes desirable for spondylolisthesis diagnosis and staging. However, there are two challenges. 1) Accurate detection of the most anterior and posterior points on the superior and inferior surfaces of each lumbar vertebra: due to the small size of the vertebrae, slight detection errors may lead to significant measurement errors and, hence, wrong disease stages. 2) Automatic localization and labeling of each lumbar vertebra is required to provide the semantic meaning of the measurement; this is difficult since different lumbar vertebrae are highly similar in both shape and image appearance. To resolve these challenges, a new automatic measurement framework is proposed with two major contributions. First, a learning-based spine labeling method that integrates both image appearance and spine geometry information is designed to detect lumbar vertebrae. Second, a hierarchical method using both population information from atlases and domain-specific information in the target image is proposed for positioning the most anterior and posterior points. Validated on 258 CT spondylolisthesis patients, our method shows very similar results to manual measurements by radiologists and significantly increases measurement efficiency.
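Meyerding grading stages the slip as the anterior displacement of the upper vertebra relative to the antero-posterior width of the subjacent endplate, binned into 25% steps; a small sketch of that staging rule (the thresholds are the standard ones, the inputs are hypothetical):
```python
def meyerding_grade(anterior_slip_mm: float, inferior_endplate_width_mm: float) -> str:
    """Stage spondylolisthesis from the slip distance and the width of the subjacent endplate."""
    slip_pct = 100.0 * anterior_slip_mm / inferior_endplate_width_mm
    if slip_pct < 25: return "Grade I"
    if slip_pct < 50: return "Grade II"
    if slip_pct < 75: return "Grade III"
    if slip_pct <= 100: return "Grade IV"
    return "Grade V (spondyloptosis)"

# Example: 12 mm anterior shift of L4 over an L5 endplate 36 mm deep -> 33% slip.
print(meyerding_grade(12.0, 36.0))  # Grade II
```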
20. Yao J, Burns JE, Forsberg D, Seitel A, Rasoulian A, Abolmaesumi P, Hammernik K, Urschler M, Ibragimov B, Korez R, Vrtovec T, Castro-Mateos I, Pozo JM, Frangi AF, Summers RM, Li S. A multi-center milestone study of clinical vertebral CT segmentation. Comput Med Imaging Graph 2016;49:16-28. PMID: 26878138. DOI: 10.1016/j.compmedimag.2015.12.006
Abstract
A multi-center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted in the spine segmentation challenge of the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method as well as future suggestions for improvement are discussed. This is the first multi-center comparative study of vertebra segmentation methods, which provides an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention.
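The absolute surface distance metric used here measures how far each surface voxel of one mask lies from the surface of the other; a compact sketch using SciPy distance transforms (isotropic 1 mm voxels assumed):
```python
import numpy as np
from scipy import ndimage

def mean_symmetric_surface_distance(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Average distance between the surfaces of two binary masks, in the units of `spacing`."""
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)
    pred_s, gt_s = surface(pred.astype(bool)), surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_s, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_s, sampling=spacing)
    distances = np.concatenate([dist_to_gt[pred_s], dist_to_pred[gt_s]])
    return distances.mean()

a = np.zeros((32, 32, 32), dtype=bool); a[8:24, 8:24, 8:24] = True
b = np.roll(a, 2, axis=0)                      # prediction shifted by 2 voxels
print(round(mean_symmetric_surface_distance(b, a), 2))
```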
Affiliation(s)
- Jianhua Yao: Imaging Biomarkers and Computer-Aided Detection Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Joseph E Burns: Department of Radiological Sciences, University of California, Irvine, CA 92868, USA
- Daniel Forsberg: Sectra, Linköping, Sweden & Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Alexander Seitel: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Abtin Rasoulian: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Purang Abolmaesumi: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Kerstin Hammernik: Institute for Computer Graphics and Vision, BioTechMed, Graz University of Technology, Graz, Austria
- Martin Urschler: Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria
- Bulat Ibragimov: University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
- Robert Korez: University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
- Tomaž Vrtovec: University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
- Isaac Castro-Mateos: Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), Department of Mechanical Engineering, University of Sheffield, Sheffield, UK
- Jose M Pozo: Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), Department of Mechanical Engineering, University of Sheffield, Sheffield, UK
- Alejandro F Frangi: Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), Department of Mechanical Engineering, University of Sheffield, Sheffield, UK
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Detection Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Shuo Li: GE Healthcare & University of Western Ontario, London, ON, Canada
21. Zheng G, Li S. Medical image computing in diagnosis and intervention of spinal diseases. Comput Med Imaging Graph 2015;45:99-101. PMID: 26364266. DOI: 10.1016/j.compmedimag.2015.08.006
Abstract
Spinal image analysis and computer assisted intervention have emerged as new and independent research areas, due to the importance of treatment of spinal diseases, increasing availability of spinal imaging, and advances in analytics and navigation tools. Among others, multiple modality spinal image analysis and spinal navigation tools have emerged as two keys in this new area. We believe that further focused research in these two areas will lead to a much more efficient and accelerated research path, avoiding detours that exist in other applications, such as in brain and heart.
Affiliation(s)
- Guoyan Zheng: Institute for Surgical Technology and Biomechanics (ISTB), The University of Bern, Stauffacherstrasse 78, 3014 Bern, Switzerland
- Shuo Li: The University of Western Ontario, London, ON, Canada; The Digital Imaging Group of London, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada
22. Korez R, Ibragimov B, Likar B, Pernuš F, Vrtovec T. A Framework for Automated Spine and Vertebrae Interpolation-Based Detection and Model-Based Segmentation. IEEE Trans Med Imaging 2015;34:1649-1662. PMID: 25585415. DOI: 10.1109/tmi.2015.2389334
Abstract
Automated and semi-automated detection and segmentation of spinal and vertebral structures from computed tomography (CT) images is a challenging task due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other, as well as due to insufficient image spatial resolution, partial volume effects, presence of image artifacts, intensity variations and low signal-to-noise ratio. In this paper, we describe a novel framework for automated spine and vertebrae detection and segmentation from 3-D CT images. A novel optimization technique based on interpolation theory is applied to detect the location of the whole spine in the 3-D image and, using the obtained location of the whole spine, to further detect the location of individual vertebrae within the spinal column. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is performed by an improved shape-constrained deformable model approach. The framework was evaluated on two publicly available CT spine image databases of 50 lumbar and 170 thoracolumbar vertebrae. Quantitative comparison against corresponding reference vertebra segmentations yielded an overall mean centroid-to-centroid distance of 1.1 mm and Dice coefficient of 83.6% for vertebra detection, and an overall mean symmetric surface distance of 0.3 mm and Dice coefficient of 94.6% for vertebra segmentation. The results indicate that by applying the proposed automated detection and segmentation framework, vertebrae can be successfully detected and accurately segmented in 3-D from CT spine images.
23. Cai Y, Osman S, Sharma M, Landis M, Li S. Multi-Modality Vertebra Recognition in Arbitrary Views Using 3D Deformable Hierarchical Model. IEEE Trans Med Imaging 2015;34:1676-1693. PMID: 25594966. DOI: 10.1109/tmi.2015.2392054
Abstract
Computer-aided diagnosis of spine problems relies on the automatic identification of spine structures in images. The task of automatic vertebra recognition is to identify the global spine and local vertebra structural information such as spine shape, vertebra location and pose. Vertebra recognition is challenging due to the large appearance variations across different image modalities/views and the high geometric distortions in spine shape. Existing vertebra recognition methods are usually simplified to vertebra detection, which mainly focuses on identifying vertebra locations and labels but cannot support further quantitative spine assessment. In this paper, we propose a vertebra recognition method using a 3D deformable hierarchical model (DHM) to achieve cross-modality local vertebra location+pose identification with accurate vertebra labeling, and global 3D spine shape recovery. We recast vertebra recognition as deformable model matching, fitting the input spine images with the 3D DHM via deformations. The 3D model-matching mechanism provides a more comprehensive simultaneous identification of vertebra location+pose+label than traditional vertebra location+label detection, and also provides an articulated 3D mesh model for the input spine section. Moreover, the DHM can conduct versatile recognition on volume and multi-slice data, and even on a single slice. Experiments show that our method can successfully extract vertebra locations, labels, and poses from multi-slice T1/T2 MR and volume CT, and can reconstruct the 3D spine model for different image views such as lumbar, cervical, and even the whole spine. The resulting vertebra information and the recovered shape can be used for quantitative diagnosis of spine problems and can be easily digitalized and integrated into modern medical PACS systems.
24. Korez R, Ibragimov B, Likar B, Pernuš F, Vrtovec T. An Improved Shape-Constrained Deformable Model for Segmentation of Vertebrae from CT Lumbar Spine Images. Recent Advances in Computational Methods and Clinical Applications for Spine Imaging 2015. DOI: 10.1007/978-3-319-14148-0_8
25. Hammernik K, Ebner T, Stern D, Urschler M, Pock T. Vertebrae Segmentation in 3D CT Images Based on a Variational Framework. Recent Advances in Computational Methods and Clinical Applications for Spine Imaging 2015. DOI: 10.1007/978-3-319-14148-0_20
26. Ibragimov B, Likar B, Pernuš F, Vrtovec T. Shape representation for efficient landmark-based segmentation in 3-D. IEEE Trans Med Imaging 2014;33:861-874. PMID: 24710155. DOI: 10.1109/tmi.2013.2296976
Abstract
In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From such a perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce the computational complexity that results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded symmetric surface distances of 0.75 mm and 1.11 mm, and Dice coefficients of 93.6% and 96.2% for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational costs were further reduced by up to a factor of three.
27. Rasoulian A, Rohling R, Abolmaesumi P. Lumbar spine segmentation using a statistical multi-vertebrae anatomical shape+pose model. IEEE Trans Med Imaging 2013;32:1890-1900. PMID: 23771318. DOI: 10.1109/tmi.2013.2268424
Abstract
Segmentation of the spinal column from computed tomography (CT) images is a preprocessing step for a range of image-guided interventions. One intervention that would benefit from accurate segmentation is spinal needle injection. Previous spinal segmentation techniques have primarily focused on identification and separate segmentation of each vertebra. Recently, statistical multi-object shape models have been introduced to extract common statistical characteristics between several anatomies. These models can be used for segmentation purposes because they are robust, accurate, and computationally tractable. In this paper, we develop a statistical multi-vertebrae shape+pose model and propose a novel registration-based technique to segment the CT images of spine. The multi-vertebrae statistical model captures the variations in shape and pose simultaneously, which reduces the number of registration parameters. We validate our technique in terms of accuracy and robustness of multi-vertebrae segmentation of CT images acquired from lumbar vertebrae of 32 subjects. The mean error of the proposed technique is below 2 mm, which is sufficient for many spinal needle injection procedures, such as facet joint injections.
28. Kadoury S, Labelle H, Paragios N. Spine segmentation in medical images using manifold embeddings and higher-order MRFs. IEEE Trans Med Imaging 2013;32:1227-1238. PMID: 23629848. DOI: 10.1109/tmi.2013.2244903
Abstract
We introduce a novel approach for segmenting articulated spine shape models from medical images. A nonlinear low-dimensional manifold is created from a training set of mesh models to establish the patterns of global shape variations. Local appearance is captured from neighborhoods in the manifold once the overall representation converges. Inference with respect to the manifold and shape parameters is performed using a higher-order Markov random field (HOMRF). Singleton and pairwise potentials measure the support from the global data and shape coherence in manifold space respectively, while higher-order cliques encode geometrical modes of variation to segment each localized vertebra model. Generic feature functions learned from ground-truth data assign costs to the higher-order terms. Optimization of the model parameters is achieved using efficient linear programming and duality. The resulting model is geometrically intuitive, captures the statistical distribution of the underlying manifold and respects image support. Clinical experiments demonstrated promising results in terms of spine segmentation. Quantitative comparison to expert identification yields an accuracy of 1.6 ± 0.6 mm for CT imaging and of 2.0 ± 0.8 mm for MR imaging, based on the localization of anatomical landmarks.
Affiliation(s)
- Samuel Kadoury: École Polytechnique de Montréal, Montréal, QC, H3C 3A7 Canada, and also with the Sainte-Justine Hospital Research Center, Montréal, QC, H3T 1C5 Canada
29. Robust MR spine detection using hierarchical learning and local articulated model. Med Image Comput Comput Assist Interv 2012;15:141-8. PMID: 23285545. DOI: 10.1007/978-3-642-33415-3_18
Abstract
A clinically acceptable automatic spine detection system, i.e., localization and labeling of vertebrae and inter-vertebral discs, is required to have high robustness, in particular to severe diseases (e.g., scoliosis) and imaging artifacts (e.g., metal artifacts in MR). Our method aims to achieve this goal with two novel components. First, instead of treating vertebrae/discs as either repetitive components or completely independent entities, we emulate a radiologist and use a hierarchical strategy to learn detectors dedicated to anchor (distinctive) vertebrae, bundle (non-distinctive) vertebrae and inter-vertebral discs, respectively. At run-time, anchor vertebrae are detected concurrently to provide redundant and distributed appearance cues robust to local imaging artifacts. Bundle vertebra detectors provide candidates of vertebrae with subtle appearance differences, whose labels are mutually determined by the anchor vertebrae to gain additional robustness. Disc locations are derived from a cloud of responses from disc detectors, which is robust to sporadic voxel-level errors. Second, owing to the non-rigidity of spine anatomy, we employ a local articulated model to effectively capture the spatial relations across vertebrae and discs. The local articulated model fuses appearance cues from different detectors in a way that is robust to abnormal spine geometry resulting from severe diseases. Our method is validated on 300 MR spine scout scans and exhibits robust performance, especially for cases with severe diseases and imaging artifacts.