1. Zhang Z, Liu T, Fan G, Li N, Li B, Pu Y, Feng Q, Zhou S. SpineMamba: Enhancing 3D spinal segmentation in clinical imaging through residual visual Mamba layers and shape priors. Comput Med Imaging Graph 2025;123:102531. PMID: 40154009. DOI: 10.1016/j.compmedimag.2025.102531.
Abstract
Accurate segmentation of three-dimensional (3D) clinical medical images is critical for the diagnosis and treatment of spinal diseases. However, the complexity of spinal anatomy and the inherent uncertainties of current imaging technologies pose significant challenges for the semantic segmentation of spinal images. Although convolutional neural networks (CNNs) and Transformer-based models have achieved remarkable progress in spinal segmentation, their limitations in modeling long-range dependencies hinder further improvements in segmentation accuracy. To address these challenges, we propose a novel framework, SpineMamba, which incorporates a residual visual Mamba layer capable of effectively capturing and modeling the deep semantic features and long-range spatial dependencies in 3D spinal data. To further enhance the structural semantic understanding of the vertebrae, we also propose a novel spinal shape prior module that captures specific anatomical information about the spine from medical images, significantly enhancing the model's ability to extract structural semantic information of the vertebrae. Extensive comparative and ablation experiments across three datasets demonstrate that SpineMamba outperforms existing state-of-the-art models. On two computed tomography (CT) datasets, the average Dice similarity coefficients achieved are 94.40±4% and 88.28±3%, respectively, while on a magnetic resonance (MR) dataset, the model achieves a Dice score of 86.95±10%. Notably, SpineMamba surpasses the widely recognized nnU-Net in segmentation accuracy, with a maximum improvement of 3.63 percentage points. These results highlight the precision, robustness, and exceptional generalization capability of SpineMamba.
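Both the CT and MR results above are reported as Dice similarity coefficients, the standard overlap metric used throughout the studies in this list. As a quick reference, a minimal NumPy sketch of the metric (illustrative only, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Example: two overlapping toy 3D masks
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True   # 8 voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:4, 1:3, 1:3] = True   # 12 voxels
print(dice_coefficient(a, b))  # 0.8 (= 2*8 / (8 + 12))
```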
Affiliation(s)
- Zhiqing Zhang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Tianyong Liu: Institute of Unconventional Oil & Gas Research, Northeast Petroleum University, Street 15, Daqing, 163318, China
- Guojia Fan: College of Information Science and Engineering, Northeastern University, Liaoning, 110819, China
- Na Li: Department of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, China
- Bin Li: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yao Pu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Shoujun Zhou: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
2. Li H, Yang J, Xuan Z, Qu M, Wang Y, Feng C. A spatio-temporal graph convolutional network for ultrasound echocardiographic landmark detection. Med Image Anal 2024;97:103272. PMID: 39024972. DOI: 10.1016/j.media.2024.103272.
Abstract
Landmark detection is a crucial task in medical image analysis, with applications across various fields. However, current methods struggle to accurately locate landmarks in medical images with blurred tissue boundaries caused by low image quality. In particular, in echocardiography, sparse annotations make it challenging to predict landmarks with positional stability and temporal consistency. In this paper, we propose a spatio-temporal graph convolutional network tailored for echocardiography landmark detection. We specifically sample landmark labels from the left ventricular endocardium and pre-calculate their correlations to establish structural priors. Our approach involves a graph convolutional neural network that learns the interrelationships among landmarks, significantly enhancing landmark accuracy within ambiguous tissue contexts. Additionally, we integrate gated recurrent units to capture the temporal consistency of landmarks across consecutive frames, augmenting the model's resilience to unlabeled data. Validation across three echocardiography datasets demonstrates that our method achieves superior accuracy compared with alternative landmark detection models.
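For readers unfamiliar with the graph-convolution building block this entry relies on, the standard update H' = σ(ÂHW) over landmark nodes can be sketched as follows. This is a generic single layer with an illustrative chain adjacency standing in for the pre-calculated landmark correlations; it is not the paper's architecture:

```python
import torch

def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2} used in standard GCNs."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

class GraphConvLayer(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_landmarks, in_dim); a_norm: (num_landmarks, num_landmarks)
        return torch.relu(a_norm @ self.linear(x))

# Toy usage: 8 landmarks with 16-dim features, chain-shaped adjacency
adj = torch.diag(torch.ones(7), 1) + torch.diag(torch.ones(7), -1)
h = GraphConvLayer(16, 32)(torch.randn(8, 16), normalized_adjacency(adj))
print(h.shape)  # torch.Size([8, 32])
```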
Affiliation(s)
- Honghe Li: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Jinzhu Yang: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Zhanfeng Xuan: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Mingjun Qu: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Yonghuai Wang: Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, China
- Chaolu Feng: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
3. Lu X, Cui Z, Sun Y, Guan Khor H, Sun A, Ma L, Chen F, Gao S, Tian Y, Zhou F, Lv Y, Liao H. Better Rough Than Scarce: Proximal Femur Fracture Segmentation With Rough Annotations. IEEE Trans Med Imaging 2024;43:3240-3252. PMID: 38652607. DOI: 10.1109/tmi.2024.3392854.
Abstract
Proximal femoral fracture segmentation in computed tomography (CT) is essential in the preoperative planning of orthopedic surgeons. Recently, numerous deep learning-based approaches have been proposed for segmenting various structures within CT scans. Nevertheless, distinguishing the varied attributes of fracture fragments and soft tissue regions in CT scans frequently poses challenges, which have received comparatively limited research attention. Moreover, the cornerstone of contemporary deep learning methodologies is the availability of annotated data, while detailed CT annotations remain scarce. To address this challenge, we propose a novel weakly-supervised framework, namely Rough Turbo Net (RT-Net), for the segmentation of proximal femoral fractures. We emphasize the utilization of human resources to produce rough annotations on a substantial scale, as opposed to relying on limited fine-grained annotations that demand substantial time to create. In RT-Net, rough annotations impose fractured-region constraints, which have demonstrated significant efficacy in enhancing the accuracy of the network. Meanwhile, fine annotations can provide more detail for recognizing edges and soft tissues. In addition, we design a spatial adaptive attention module (SAAM) that adapts to the spatial distribution of the fracture regions and aligns features in each decoder. Moreover, we propose a fine-edge loss, applied through an edge discrimination network, to penalize missing or imprecise edge features. Extensive quantitative and qualitative experiments demonstrate the superiority of RT-Net over state-of-the-art approaches. Furthermore, additional experiments show that RT-Net can produce pseudo labels for raw CT images that further improve fracture segmentation performance, and it has the potential to improve segmentation performance on public datasets. The code is available at: https://github.com/zyairelu/RT-Net.
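The fine-edge loss idea, penalizing missing or imprecise edges in the prediction, can be illustrated with a simple differentiable sketch. The Sobel filtering and L1 comparison below are assumptions chosen for illustration; RT-Net's actual edge discrimination network is a learned module:

```python
import torch
import torch.nn.functional as F

def sobel_edges(mask: torch.Tensor) -> torch.Tensor:
    """Approximate edge map of a soft segmentation mask of shape (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel-y is the transpose of Sobel-x
    gx = F.conv2d(mask, kx, padding=1)
    gy = F.conv2d(mask, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def fine_edge_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between predicted and ground-truth edge maps."""
    return F.l1_loss(sobel_edges(pred), sobel_edges(target))

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(fine_edge_loss(pred, target))  # scalar loss, differentiable w.r.t. pred
```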
4. Constant C, Aubin CE, Kremers HM, Garcia DVV, Wyles CC, Rouzrokh P, Larson AN. The use of deep learning in medical imaging to improve spine care: A scoping review of current literature and clinical applications. N Am Spine Soc J 2023;15:100236. PMID: 37599816. PMCID: PMC10432249. DOI: 10.1016/j.xnsj.2023.100236.
Abstract
Background Artificial intelligence is a revolutionary technology that promises to assist clinicians in improving patient care. In radiology, deep learning (DL) is widely used in clinical decision aids due to its ability to analyze complex patterns and images. It enables rapid, enhanced data and imaging analysis, from diagnosis to outcome prediction. The purpose of this study was to evaluate the current literature and clinical utilization of DL in spine imaging. Methods This study is a scoping review that utilized the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to review the scientific literature from 2012 to 2021. A search of the PubMed, Web of Science, Embase, and IEEE Xplore databases, with syntax specific to DL and medical imaging in spine care applications, was conducted to collect all original publications on the subject. Specific data were extracted from the available literature, including algorithm application, algorithms tested, database type and size, algorithm training method, and outcome of interest. Results A total of 365 studies (total sample of 232,394 patients) were included and grouped into 4 general applications: diagnostic tools, clinical decision support tools, automated clinical/instrumentation assessment, and clinical outcome prediction. Notable disparities exist in the selected algorithms and in training across multiple disparate databases. The most frequently used algorithms were U-Net and ResNet. A DL model was developed and validated in 92% of included studies, while a pre-existing DL model was investigated in 8%. Of all developed models, only 15% have been externally validated. Conclusions Based on this scoping review, DL in spine imaging is used in a broad range of clinical applications, particularly for diagnosing spinal conditions. There is a wide variety of DL algorithms, database characteristics, and training methods. Future studies should focus on external validation of existing models before bringing them into clinical use.
Affiliation(s)
- Caroline Constant: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States; Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada; AO Research Institute Davos, Clavadelerstrasse 8, CH 7270, Davos, Switzerland
- Carl-Eric Aubin: Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- Hilal Maradit Kremers: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Diana V. Vera Garcia: Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Cody C. Wyles: Orthopedic Surgery AI Laboratory and Department of Orthopedic Surgery, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Pouria Rouzrokh: Orthopedic Surgery AI Laboratory and Radiology Informatics Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
- Annalise Noelle Larson: Orthopedic Surgery AI Laboratory and Department of Orthopedic Surgery, Mayo Clinic, 200 1st St Southwest, Rochester, MN 55902, United States
5. Liu J, Xiao H, Fan J, Hu W, Yang Y, Dong P, Xing L, Cai J. An overview of artificial intelligence in medical physics and radiation oncology. J Natl Cancer Cent 2023;3:211-221. PMID: 39035195. PMCID: PMC11256546. DOI: 10.1016/j.jncc.2023.08.002.
Abstract
Artificial intelligence (AI) is developing rapidly and has found widespread applications in medicine, especially radiotherapy. This paper provides a brief overview of AI applications in radiotherapy, highlights the AI research directions that can potentially make significant impacts, and surveys relevant ongoing work in these directions. Challenging issues related to the clinical application of AI, such as the robustness and interpretability of AI models, are also discussed. The future research directions of AI in the field of medical physics and radiotherapy are highlighted.
Affiliation(s)
- Jiali Liu: Department of Clinical Oncology, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China; Department of Clinical Oncology, Hong Kong University Li Ka Shing Medical School, Hong Kong, China
- Haonan Xiao: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jiawei Fan: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yong Yang: Department of Radiation Oncology, Stanford University, CA, USA
- Peng Dong: Department of Radiation Oncology, Stanford University, CA, USA
- Lei Xing: Department of Radiation Oncology, Stanford University, CA, USA
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
6. Meng D, Boyer E, Pujades S. Vertebrae localization, segmentation and identification using a graph optimization and an anatomic consistency cycle. Comput Med Imaging Graph 2023;107:102235. PMID: 37130486. DOI: 10.1016/j.compmedimag.2023.102235.
Abstract
Vertebrae localization, segmentation and identification in CT images are key to numerous clinical applications. While deep learning strategies have brought significant improvements to this field over recent years, transitional and pathological vertebrae continue to plague most existing approaches as a consequence of their poor representation in training datasets. Alternatively, non-learning-based methods take advantage of prior knowledge to handle such particular cases. In this work we propose to combine both strategies. To this purpose we introduce an iterative cycle in which individual vertebrae are recurrently localized, segmented and identified using deep networks, while anatomic consistency is enforced using statistical priors. In this strategy, transitional vertebrae identification is handled by encoding their configurations in a graphical model that aggregates local deep-network predictions into an anatomically consistent final result. Our approach achieves state-of-the-art results on the VerSe20 challenge benchmark, outperforms all methods on transitional vertebrae, and generalizes well to the VerSe19 challenge benchmark. Furthermore, our method can detect and report inconsistent spine regions that do not satisfy the anatomic consistency priors. Our code and model are openly available for research purposes.
Affiliation(s)
- Di Meng: Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
- Edmond Boyer: Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
- Sergi Pujades: Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
7. CT-Based Automatic Spine Segmentation Using Patch-Based Deep Learning. Int J Intell Syst 2023. DOI: 10.1155/2023/2345835.
Abstract
CT vertebral segmentation plays an essential role in various clinical applications, such as computer-assisted surgical interventions, assessment of spinal abnormalities, and detection of vertebral compression fractures. Automatic CT vertebral segmentation is challenging due to the overlapping shadows of thoracoabdominal structures such as the lungs, bony structures such as the ribs, and other issues such as ambiguous object borders, complicated spine architecture, patient variability, and fluctuations in image contrast. Deep learning is an emerging technique for disease diagnosis in the medical field. This study proposes a patch-based deep learning approach to extract discriminative features from unlabeled data using a stacked sparse autoencoder (SSAE). 2D slices from a CT volume are divided into overlapping patches that are fed into the model for training. A random under-sampling (RUS) module is applied to balance the training data by selecting a subset of the majority class. The SSAE uses pixel intensities alone to learn high-level features that distinguish image patches. Each image is subjected to a sliding-window operation to express image patches using the autoencoder's high-level features, which are then fed into a sigmoid layer to classify whether each patch is a vertebra or not. We validate our approach on three diverse publicly available datasets: VerSe, CSI-Seg, and the Lumbar CT dataset. After configuration optimization, our proposed method outperformed other models, achieving 89.9% precision, 90.2% recall, 98.9% accuracy, 90.4% F-score, 82.6% intersection over union (IoU), and 90.2% Dice coefficient (DC). These results demonstrate that the model performs consistently under a variety of validation strategies and is flexible, fast, and generalizable, making it suited for clinical application.
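The overlapping-patch pipeline described above is easy to reproduce in outline; the patch size and stride below are illustrative choices, not the study's configuration:

```python
import numpy as np

def extract_patches(slice_2d: np.ndarray, patch: int = 32, stride: int = 16):
    """Overlapping patches from a 2D CT slice, as fed to the autoencoder."""
    h, w = slice_2d.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(slice_2d[y:y + patch, x:x + patch])
            coords.append((y, x))  # remember position for reassembly
    return np.stack(patches), coords

ct_slice = np.random.rand(128, 128).astype(np.float32)
patches, coords = extract_patches(ct_slice)
print(patches.shape)  # (49, 32, 32) with these sizes
```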
8. Wu Z, Xia G, Zhang X, Zhou F, Ling J, Ni X, Li Y. A novel 3D lumbar vertebrae location and segmentation method based on the fusion envelope of 2D hybrid visual projection images. Comput Biol Med 2022;151:106190. PMID: 36306575. DOI: 10.1016/j.compbiomed.2022.106190.
Abstract
In recent years, fast and precise lumbar vertebrae segmentation has become an important topic in practical medical diagnosis and assisted surgical scenarios. However, most existing vertebral segmentation methods operate on the whole vertebral scanning space, which makes it difficult to meet clinical needs because of their large time and space complexity. Unlike existing methods, to improve the runtime of lumbar segmentation while ensuring its accuracy, a novel 3D lumbar vertebrae location and segmentation method based on the fusion envelope of 2D hybrid visual projection images (LVLS-HVPFE) is proposed in this paper. First, a 2D projection location network based on the fusion envelope of hybrid visual projection images is proposed to obtain the accurate location of each intact lumbar vertebra in the coronal and sagittal planes. Within it, an envelope dataset of hybrid visual projection images (EDHVPs) is established to enhance feature representation and suppress interference during dimensionality-reduction projection; an envelope deep neural network (EDNN) for EDHVPs is established to effectively obtain depth envelope structure features at three different scales; and a dimension-reduction fusion mechanism is proposed to increase the sampling density of features and ensure the mutual independence of multi-scale features. Second, a 3D localization criterion with spatial dimensionality reduction (SDRLC) is proposed as a measure to verify the distribution consistency of vertebral targets in the coronal and sagittal planes of a CT scan, providing directional guidance for the subsequent 3D lumbar segmentation. Third, within the 3D positioning subspace of each intact lumbar vertebra, a 3D segmentation network based on spatial orientation guidance is used to achieve accurate segmentation of the corresponding vertebra. The proposed method is evaluated on three representative datasets, and experimental results show that it is superior to state-of-the-art methods.
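The dimensionality-reduction step underlying this method, collapsing a CT volume into coronal and sagittal projection images, can be illustrated with plain maximum-intensity projections. The paper's fusion-envelope construction is more elaborate than this, and the (z, y, x) axis convention below is an assumption:

```python
import numpy as np

def coronal_sagittal_projections(volume: np.ndarray):
    """Maximum-intensity projections of a CT volume ordered (z, y, x)."""
    coronal = volume.max(axis=1)   # collapse the anterior-posterior axis
    sagittal = volume.max(axis=2)  # collapse the left-right axis
    return coronal, sagittal

vol = np.random.rand(64, 96, 96).astype(np.float32)
cor, sag = coronal_sagittal_projections(vol)
print(cor.shape, sag.shape)  # (64, 96) (64, 96)
```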
Affiliation(s)
- Zhengyang Wu: School of Microelectronics and Communication Engineering, Chongqing University, No. 174, Zhengjie Street, Shapingba District, 400044, Chongqing, China; R&D Center, Chongqing Boshikang Technology Co., Ltd., No. 78, Fenghe Road, Beibei District, 400714, Chongqing, China
- Guifeng Xia: R&D Center, Chongqing Boshikang Technology Co., Ltd., No. 78, Fenghe Road, Beibei District, 400714, Chongqing, China
- Xiaoheng Zhang: R&D Center, Chongqing Boshikang Technology Co., Ltd., No. 78, Fenghe Road, Beibei District, 400714, Chongqing, China; School of Electronic Information Engineering, Chongqing Open University, No. 1, Hualong Avenue, Science Park, Jiulongpo District, 400052, Chongqing, China
- Fayuan Zhou: School of Microelectronics and Communication Engineering, Chongqing University, No. 174, Zhengjie Street, Shapingba District, 400044, Chongqing, China; R&D Center, Chongqing Boshikang Technology Co., Ltd., No. 78, Fenghe Road, Beibei District, 400714, Chongqing, China
- Jing Ling: R&D Center, Chongqing Boshikang Technology Co., Ltd., No. 78, Fenghe Road, Beibei District, 400714, Chongqing, China
- Xin Ni: R&D Center, Chongqing Boshikang Technology Co., Ltd., No. 78, Fenghe Road, Beibei District, 400714, Chongqing, China
- Yongming Li: School of Microelectronics and Communication Engineering, Chongqing University, No. 174, Zhengjie Street, Shapingba District, 400044, Chongqing, China
9. Alukaev D, Kiselev S, Mustafaev T, Ainur A, Ibragimov B, Vrtovec T. A deep learning framework for vertebral morphometry and Cobb angle measurement with external validation. Eur Spine J 2022;31:2115-2124. PMID: 35596800. DOI: 10.1007/s00586-022-07245-4.
Abstract
PURPOSE To propose a fully automated deep learning (DL) framework for vertebral morphometry and Cobb angle measurement from three-dimensional (3D) computed tomography (CT) images of the spine, and to validate the proposed framework on an external database. METHODS The vertebrae were first localized and segmented in each 3D CT image using a DL architecture based on an ensemble of U-Nets. Automated vertebral morphometry, in the form of vertebral body (VB) and intervertebral disk (IVD) heights, and spinal curvature measurements, in the form of coronal and sagittal Cobb angles (thoracic kyphosis and lumbar lordosis), were then performed using dedicated machine learning techniques. The framework was trained on 1725 vertebrae from 160 CT images and validated on an external database of 157 vertebrae from 15 CT images. RESULTS The resulting mean absolute errors (± standard deviation) between the obtained DL and corresponding manual measurements were 1.17 ± 0.40 mm for VB heights, 0.54 ± 0.21 mm for IVD heights, and 3.42 ± 1.36° for coronal and sagittal Cobb angles, with respective maximal absolute errors of 2.51 mm, 1.64 mm, and 5.52°. Linear regression revealed excellent agreement, with Pearson's correlation coefficients of 0.943, 0.928, and 0.996, respectively. CONCLUSION The obtained results are within the range of values obtained by existing DL approaches without external validation. The results therefore confirm the scalability of the proposed DL framework with respect to application to external data, and to the time and computational resources required for framework training.
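As a geometric aside, a Cobb angle in a given plane reduces to the angle between the endplate directions of the two end vertebrae. A minimal sketch, assuming each endplate has already been summarized as a 2D direction vector (e.g., from detected landmarks):

```python
import numpy as np

def cobb_angle(endplate_a: np.ndarray, endplate_b: np.ndarray) -> float:
    """Angle in degrees between two endplate direction vectors in a 2D plane."""
    ua = endplate_a / np.linalg.norm(endplate_a)
    ub = endplate_b / np.linalg.norm(endplate_b)
    cos_theta = np.clip(abs(np.dot(ua, ub)), -1.0, 1.0)  # orientation-invariant
    return float(np.degrees(np.arccos(cos_theta)))

# Superior endplate of the upper end vertebra vs inferior endplate of the lower one
upper = np.array([1.0, 0.15])   # slight tilt one way
lower = np.array([1.0, -0.30])  # opposite tilt
print(round(cobb_angle(upper, lower), 1))  # ≈ 25.2 degrees
```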
Affiliation(s)
- Danis Alukaev: AI Lab, Innopolis University, Universitetskaya St 1, 420500, Innopolis, Republic of Tatarstan, Russian Federation
- Semen Kiselev: AI Lab, Innopolis University, Universitetskaya St 1, 420500, Innopolis, Republic of Tatarstan, Russian Federation
- Tamerlan Mustafaev: AI Lab, Innopolis University, Universitetskaya St 1, 420500, Innopolis, Republic of Tatarstan, Russian Federation; Kazan Public Hospital, Chekhova 1A, 42000, Kazan, Republic of Tatarstan, Russian Federation
- Ahatov Ainur: Barsmed Diagnostic Center, Daurskaya 12, 42000, Kazan, Republic of Tatarstan, Russian Federation
- Bulat Ibragimov: Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100, Copenhagen, Denmark; Laboratory of Imaging Technologies, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000, Ljubljana, Slovenia
- Tomaž Vrtovec: Laboratory of Imaging Technologies, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000, Ljubljana, Slovenia
10. Zhu H, Yao Q, Xiao L, Zhou SK. Learning to Localize Cross-Anatomy Landmarks in X-Ray Images with a Universal Model. BME Front 2022;2022:9765095. PMID: 37850187. PMCID: PMC10521670. DOI: 10.34133/2022/9765095.
Abstract
Objective and Impact Statement. In this work, we develop a universal anatomical landmark detection model which learns once from multiple datasets corresponding to different anatomical regions. Compared with a conventional model trained on a single dataset, this universal model is not only more lightweight and easier to train but also improves the accuracy of anatomical landmark localization. Introduction. The accurate and automatic localization of anatomical landmarks plays an essential role in medical image analysis. However, recent deep learning-based methods only utilize limited data from a single dataset. It is promising and desirable to build a model learned from different regions which harnesses the power of big data. Methods. Our model consists of a local network and a global network, which capture local and global features, respectively. The local network is a fully convolutional network built up with depth-wise separable convolutions, and the global network uses dilated convolution to enlarge the receptive field and model global dependencies. Results. We evaluate our model on four 2D X-ray image datasets totaling 1710 images and 72 landmarks in four anatomical regions. Extensive experimental results show that our model improves detection accuracy compared to the state-of-the-art methods. Conclusion. Our model makes the first attempt to train a single network on multiple datasets for landmark detection. Experimental results qualitatively and quantitatively show that our proposed model performs better than other models trained on multiple datasets, and even better than models trained on each single dataset separately.
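The local network's depth-wise separable convolution and the global network's dilated convolution are both standard PyTorch building blocks; a short sketch with illustrative channel counts (not the paper's exact configuration):

```python
import torch

class DepthwiseSeparableConv(torch.nn.Module):
    """Depth-wise 3x3 conv per channel followed by a 1x1 point-wise conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = torch.nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                         padding=1, groups=in_ch)
        self.pointwise = torch.nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.pointwise(self.depthwise(x)))

# A dilated 3x3 conv enlarges the receptive field, as in the global network
dilated = torch.nn.Conv2d(32, 32, kernel_size=3, padding=2, dilation=2)
x = torch.randn(1, 16, 128, 128)
y = DepthwiseSeparableConv(16, 32)(x)
print(dilated(y).shape)  # torch.Size([1, 32, 128, 128])
```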
Affiliation(s)
- Heqin Zhu: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Qingsong Yao: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Li Xiao: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- S. Kevin Zhou: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China
11. Automated segmentation of the fractured vertebrae on CT and its applicability in a radiomics model to predict fracture malignancy. Sci Rep 2022;12:6735. PMID: 35468985. PMCID: PMC9038736. DOI: 10.1038/s41598-022-10807-7.
Abstract
Although CT radiomics has shown promising results in the evaluation of vertebral fractures, the need for manual segmentation of fractured vertebrae limited the routine clinical implementation of radiomics. Therefore, automated segmentation of fractured vertebrae is needed for successful clinical use of radiomics. In this study, we aimed to develop and validate an automated algorithm for segmentation of fractured vertebral bodies on CT, and to evaluate the applicability of the algorithm in a radiomics prediction model to differentiate benign and malignant fractures. A convolutional neural network was trained to perform automated segmentation of fractured vertebral bodies using 341 vertebrae with benign or malignant fractures from 158 patients, and was validated on independent test sets (internal test, 86 vertebrae [59 patients]; external test, 102 vertebrae [59 patients]). Then, a radiomics model predicting fracture malignancy on CT was constructed, and the prediction performance was compared between automated and human expert segmentations. The algorithm achieved good agreement with human expert segmentation at testing (Dice similarity coefficient, 0.93-0.94; cross-sectional area error, 2.66-2.97%; average surface distance, 0.40-0.54 mm). The radiomics model demonstrated good performance in the training set (AUC, 0.93). In the test sets, automated and human expert segmentations showed comparable prediction performances (AUC, internal test, 0.80 vs 0.87, p = 0.044; external test, 0.83 vs 0.80, p = 0.37). In summary, we developed and validated an automated segmentation algorithm that showed comparable performance to human expert segmentation in a CT radiomics model to predict fracture malignancy, which may enable more practical clinical utilization of radiomics.
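The core of such a radiomics comparison is fitting a classifier on features extracted from each segmentation source and scoring it with AUC. A minimal scikit-learn sketch on simulated stand-in data (the feature matrix and labels below are synthetic, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical radiomics features (n_vertebrae, n_features) and malignancy labels
features = rng.normal(size=(200, 30))
labels = (features[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

x_tr, x_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(x_te)[:, 1])
print(f"AUC: {auc:.2f}")  # repeat per feature set (automated vs expert masks)
```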
12. Zhao W, Shen L, Islam MT, Qin W, Zhang Z, Liang X, Zhang G, Xu S, Li X. Artificial intelligence in image-guided radiotherapy: a review of treatment target localization. Quant Imaging Med Surg 2021;11:4881-4894. PMID: 34888196. PMCID: PMC8611462. DOI: 10.21037/qims-21-199.
Abstract
Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing the normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT), and discuss the implications of these applications for the future of clinical practice of radiotherapy. The benefits, limitations, and some important trends in research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty, and improve treatment precision. In particular, these techniques also allow more healthy tissue to be spared while keeping tumor coverage the same or even better.
Affiliation(s)
- Wei Zhao: School of Physics, Beihang University, Beijing, China
- Liyue Shen: Department of Radiation Oncology, Stanford University, Stanford, USA
- Md Tauhidul Islam: Department of Radiation Oncology, Stanford University, Stanford, USA
- Wenjian Qin: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhicheng Zhang: Department of Radiation Oncology, Stanford University, Stanford, USA
- Xiaokun Liang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Gaolong Zhang: School of Physics, Beihang University, Beijing, China
- Shouping Xu: Department of Radiation Oncology, PLA General Hospital, Beijing, China
- Xiaomeng Li: Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China
13. D’Antoni F, Russo F, Ambrosio L, Vollero L, Vadalà G, Merone M, Papalia R, Denaro V. Artificial Intelligence and Computer Vision in Low Back Pain: A Systematic Review. Int J Environ Res Public Health 2021;18(20):10909. PMID: 34682647. PMCID: PMC8535895. DOI: 10.3390/ijerph182010909.
Abstract
Chronic Low Back Pain (LBP) is a symptom that may be caused by several diseases, and it is currently the leading cause of disability worldwide. The increased amount of digital images in orthopaedics has led to the development of methods based on artificial intelligence, and on computer vision in particular, which aim to improve the diagnosis and treatment of LBP. In this manuscript, we have systematically reviewed the available literature on the use of computer vision in the diagnosis and treatment of LBP. A systematic search of the PubMed electronic database was performed. The search strategy was set as combinations of the following keywords: "Artificial Intelligence", "Feature Extraction", "Segmentation", "Computer Vision", "Machine Learning", "Deep Learning", "Neural Network", "Low Back Pain", "Lumbar". Results: The search returned a total of 558 articles. After careful evaluation of the abstracts, 358 were excluded, whereas 124 papers were excluded after full-text examination, taking the number of eligible articles to 76. The main applications of computer vision in LBP include feature extraction and segmentation, which are usually followed by further tasks. Most recent methods use deep learning models rather than digital image processing techniques. The best-performing methods for segmentation of vertebrae, intervertebral discs, spinal canal and lumbar muscles achieve Sørensen-Dice scores greater than 90%, whereas studies focusing on localization and identification of structures collectively showed an accuracy greater than 80%. Future advances in artificial intelligence are expected to increase systems' autonomy and reliability, thus providing even more effective tools for the diagnosis and treatment of LBP.
Affiliation(s)
- Federico D’Antoni: Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy
- Fabrizio Russo (corresponding author): Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Luca Ambrosio: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Luca Vollero: Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy
- Gianluca Vadalà: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Mario Merone (corresponding author): Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy
- Rocco Papalia: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Vincenzo Denaro: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
14. Li Q, Du Z, Yu H. Precise laminae segmentation based on neural network for robot-assisted decompressive laminectomy. Comput Methods Programs Biomed 2021;209:106333. PMID: 34391999. DOI: 10.1016/j.cmpb.2021.106333.
Abstract
BACKGROUND AND OBJECTIVE Decompressive laminectomy is one of the most common operations to treat lumbar spinal stenosis by removing the laminae above the spinal nerve. Recently, an increasing number of robots have been deployed during the surgical process to reduce the burden on surgeons and to reduce complications. However, robot-assisted decompressive laminectomy requires an accurate 3D model of the laminae from a CT image. The purpose of this paper is to precisely segment the laminae with fewer calculations. METHODS We propose a two-stage neural network, SegRe-Net. In the first stage, the entire intraoperative CT image is input to acquire a coarse low-resolution segmentation of the vertebrae and a probability map of the laminar centers. The second stage is trained to refine the segmentation of the laminae. RESULTS Three publicly available datasets were used to train and validate the models. The experimental results demonstrated the effectiveness of the proposed network on laminar segmentation, with an average Dice coefficient of 96.38% and an average symmetric surface distance of 0.097 mm. CONCLUSION The proposed two-stage network achieves better results than the baseline models on the laminae segmentation task with fewer computations and learnable parameters. Our method improves the accuracy of laminar models and reduces image processing time. It can be used to provide a more precise planned trajectory and may promote the clinical application of robot-assisted decompressive laminectomy surgery.
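The coarse-to-fine design (locate laminar centers at low resolution, then refine a full-resolution crop) hinges on extracting a subvolume around each predicted center. A sketch with a hypothetical crop size:

```python
import numpy as np

def crop_around_center(volume: np.ndarray, center, size=(64, 64, 64)) -> np.ndarray:
    """Extract a fixed-size subvolume around a predicted laminar center,
    clamping the window so it stays inside the CT volume."""
    starts = [int(np.clip(c - s // 2, 0, dim - s))
              for c, s, dim in zip(center, size, volume.shape)]
    z, y, x = starts
    return volume[z:z + size[0], y:y + size[1], x:x + size[2]]

ct = np.random.rand(160, 256, 256).astype(np.float32)
roi = crop_around_center(ct, center=(80, 128, 100))
print(roi.shape)  # (64, 64, 64) -- the input to the second, refinement stage
```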
Affiliation(s)
- Qian Li: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Zhijiang Du: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Hongjian Yu: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
15. A Region-Based Deep Level Set Formulation for Vertebral Bone Segmentation of Osteoporotic Fractures. J Digit Imaging 2021;33:191-203. PMID: 31011954. DOI: 10.1007/s10278-019-00216-0.
Abstract
Accurate segmentation of the vertebrae from medical images plays an important role in computer-aided diagnosis (CAD). It provides doctors and radiologists with an initial and early diagnosis of various vertebral abnormalities. Vertebrae segmentation is a very important but difficult task in medical imaging due to low-contrast imaging and noise, and it becomes even more challenging when dealing with fractured (osteoporotic) cases. This work addresses that challenging problem. In the past, various segmentation techniques for vertebrae have been proposed. Recently, deep learning techniques have been introduced in biomedical image processing for the segmentation and characterization of several abnormalities; they are becoming popular for segmentation because of their robustness and accuracy. In this paper, we present a novel combination of a traditional region-based level set with a deep learning framework in order to predict the shape of vertebral bones accurately and thus handle fractured cases efficiently. We term this novel framework "FU-Net"; it is a powerful and practical framework for fractured vertebrae segmentation. The proposed method was successfully evaluated on two challenging datasets: (1) 20 CT scans, 15 healthy cases and 5 fractured cases, provided at the spine segmentation challenge CSI 2014; (2) 25 CT images (both healthy and fractured cases) provided at the spine segmentation challenge CSI 2016, also known as the xVertSeg.v1 challenge. The proposed technique achieved promising results, especially on fractured cases. The Dice score was 96.4 ± 0.8% without fractured cases and 92.8 ± 1.9% with fractured cases on the CSI 2014 dataset (lumbar and thoracic). Similarly, the Dice score was 95.2 ± 1.9% on the 15 CT scans with given ground truths and 95.4 ± 2.1% on the full set of 25 CT scans of the CSI 2016 dataset (with 10 annotated CT datasets). The proposed technique outperformed other state-of-the-art techniques and, for the first time, handled fractured cases efficiently.
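The region-based level-set term that FU-Net builds on (in the spirit of Chan-Vese) drives the contour using the mean intensities inside and outside the current region. A didactic NumPy sketch of the region force alone, without curvature regularization or the deep-learning component:

```python
import numpy as np

def region_force(image: np.ndarray, phi: np.ndarray,
                 lam1: float = 1.0, lam2: float = 1.0) -> np.ndarray:
    """Chan-Vese region term: raises phi where a pixel matches the inside
    mean c1 better than the outside mean c2, and lowers it otherwise."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0     # mean inside
    c2 = image[~inside].mean() if (~inside).any() else 0.0 # mean outside
    return lam2 * (image - c2) ** 2 - lam1 * (image - c1) ** 2

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0    # bright "vertebra"
phi = -np.ones_like(img); phi[24:40, 24:40] = 1.0    # seed contour inside it
for _ in range(30):                                  # gradient-style updates
    phi += 0.1 * region_force(img, phi)
print((phi > 0).sum())  # 576: the contour expanded to cover the bright square
```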
16. Sekuboyina A, Husseini ME, Bayat A, Löffler M, Liebl H, Li H, Tetteh G, Kukačka J, Payer C, Štern D, Urschler M, Chen M, Cheng D, Lessmann N, Hu Y, Wang T, Yang D, Xu D, Ambellan F, Amiranashvili T, Ehlke M, Lamecker H, Lehnert S, Lirio M, Olaguer NPD, Ramm H, Sahu M, Tack A, Zachow S, Jiang T, Ma X, Angerman C, Wang X, Brown K, Kirszenberg A, Puybareau É, Chen D, Bai Y, Rapazzo BH, Yeah T, Zhang A, Xu S, Hou F, He Z, Zeng C, Xiangshang Z, Liming X, Netherton TJ, Mumme RP, Court LE, Huang Z, He C, Wang LW, Ling SH, Huỳnh LD, Boutry N, Jakubicek R, Chmelik J, Mulay S, Sivaprakasam M, Paetzold JC, Shit S, Ezhov I, Wiestler B, Glocker B, Valentinitsch A, Rempfler M, Menze BH, Kirschke JS. VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. Med Image Anal 2021;73:102166. PMID: 34340104. DOI: 10.1016/j.media.2021.102166.
Abstract
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
Affiliation(s)
- Anjany Sekuboyina: Department of Informatics, Technical University of Munich, Germany; Munich School of BioEngineering, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Malek E Husseini: Department of Informatics, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Amirhossein Bayat: Department of Informatics, Technical University of Munich, Germany; Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Hans Liebl: Department of Neuroradiology, Klinikum Rechts der Isar, Germany
- Hongwei Li: Department of Informatics, Technical University of Munich, Germany
- Giles Tetteh: Department of Informatics, Technical University of Munich, Germany
- Jan Kukačka: Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Germany
- Christian Payer: Institute of Computer Graphics and Vision, Graz University of Technology, Austria
- Darko Štern: Gottfried Schatz Research Center: Biophysics, Medical University of Graz, Austria
- Martin Urschler: School of Computer Science, The University of Auckland, New Zealand
- Maodong Chen: Computer Vision Group, iFLYTEK Research South China, China
- Dalong Cheng: Computer Vision Group, iFLYTEK Research South China, China
- Nikolas Lessmann: Department of Radiology and Nuclear Medicine, Radboud University Medical Center Nijmegen, The Netherlands
- Yujin Hu: Shenzhen Research Institute of Big Data, China
- Tianfu Wang: School of Biomedical Engineering, Health Science Center, Shenzhen University, China
- Xin Wang: Department of Electronic Engineering, Fudan University, China; Department of Radiology, University of North Carolina at Chapel Hill, USA
- Feng Hou: Institute of Computing Technology, Chinese Academy of Sciences, China
- Zheng Xiangshang: College of Computer Science and Technology, Zhejiang University, China; Real Doctor AI Research Centre, Zhejiang University, China
- Xu Liming: College of Computer Science and Technology, Zhejiang University, China
- Zixun Huang: Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, China
- Chenhang He: Department of Computing, The Hong Kong Polytechnic University, China
- Li-Wen Wang: Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, China
- Sai Ho Ling: The School of Biomedical Engineering, University of Technology Sydney, Australia
- Lê Duy Huỳnh: EPITA Research and Development Laboratory (LRDE), France
- Nicolas Boutry: EPITA Research and Development Laboratory (LRDE), France
- Roman Jakubicek: Department of Biomedical Engineering, Brno University of Technology, Czech Republic
- Jiri Chmelik: Department of Biomedical Engineering, Brno University of Technology, Czech Republic
- Supriti Mulay: Indian Institute of Technology Madras, India; Healthcare Technology Innovation Centre, India
- Suprosanna Shit: Department of Informatics, Technical University of Munich, Germany
- Ivan Ezhov: Department of Informatics, Technical University of Munich, Germany
- Ben Glocker: Department of Computing, Imperial College London, UK
- Markus Rempfler: Friedrich Miescher Institute for Biomedical Engineering, Switzerland
- Björn H Menze: Department of Informatics, Technical University of Munich, Germany; Department for Quantitative Biomedicine, University of Zurich, Switzerland
- Jan S Kirschke: Department of Neuroradiology, Klinikum Rechts der Isar, Germany
17. Coates JTT, Pirovano G, El Naqa I. Radiomic and radiogenomic modeling for radiotherapy: strategies, pitfalls, and challenges. J Med Imaging (Bellingham) 2021;8:031902. PMID: 33768134. PMCID: PMC7985651. DOI: 10.1117/1.jmi.8.3.031902.
Abstract
The power of predictive modeling for radiotherapy outcomes has historically been limited by an inability to adequately capture patient-specific variabilities; however, next-generation platforms together with imaging technologies and powerful bioinformatic tools have facilitated new strategies and provided optimism. Integrating clinical, biological, imaging, and treatment-specific data for more accurate prediction of tumor control probabilities or the risk of radiation-induced side effects is a high-dimensional problem whose solution could have widespread benefits for a diverse patient population; we discuss technical approaches toward this objective. Increasing interest in the above is specifically reflected by the emergence of two nascent fields, which are distinct but complementary: radiogenomics, which broadly seeks to integrate biological risk factors together with treatment and diagnostic information to generate individualized patient risk profiles, and radiomics, which further leverages large-scale imaging correlates and extracted features for the same purpose. We review classical analytical and data-driven approaches for outcome prediction that serve as antecedents to both radiomic and radiogenomic strategies. Discussion then focuses on uses of conventional and deep machine learning in radiomics. We further consider promising strategies for the harmonization of high-dimensional, heterogeneous multiomics datasets (panomics) and techniques for nonparametric validation of best-fit models. Strategies to overcome common pitfalls that are unique to data-intensive radiomics are also discussed.
Affiliation(s)
- James T. T. Coates: Center for Cancer Research, Massachusetts General Hospital & Harvard Medical School, Boston, Massachusetts, United States
- Giacomo Pirovano: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, United States
- Issam El Naqa: Department of Machine Learning, Moffitt Cancer Center and Research Institute, Tampa, Florida, United States
18. Kim KC, Cho HC, Jang TJ, Choi JM, Seo JK. Automatic detection and segmentation of lumbar vertebrae from X-ray images for compression fracture evaluation. Comput Methods Programs Biomed 2021;200:105833. PMID: 33250283. DOI: 10.1016/j.cmpb.2020.105833.
Abstract
For compression fracture detection and evaluation, an automatic X-ray image segmentation technique that combines deep-learning and level-set methods is proposed. Automatic segmentation is much more difficult for X-ray images than for CT or MRI images because they contain overlapping shadows of thoracoabdominal structures, including the lungs, bowel gases, and other bony structures such as the ribs. Additional difficulties include unclear object boundaries, the complex shape of the vertebra, inter-patient variability, and variations in image contrast. Accordingly, a structured hierarchical segmentation method is presented that combines the advantages of two deep-learning methods. Pose-driven learning is used to selectively identify the five lumbar vertebrae in an accurate and robust manner. With knowledge of the vertebral positions, M-net is employed to segment each individual vertebra. Finally, fine-tuning segmentation is applied by combining the level-set method with the previously obtained segmentation results. The performance of the proposed method was validated on 160 lumbar X-ray images, resulting in a mean Dice similarity metric of 91.60±2.22%. The results show that the proposed method achieves accurate and robust identification of each lumbar vertebra and fine segmentation of individual vertebrae.
Affiliation(s)
- Kang Cheol Kim: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Hyun Cheol Cho: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Tae Jun Jang: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
- Jin Keun Seo: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, South Korea
19. Li Q, Du Z, Yu H. Grinding trajectory generator in robot-assisted laminectomy surgery. Int J Comput Assist Radiol Surg 2021;16:485-494. PMID: 33507483. DOI: 10.1007/s11548-021-02316-1.
Abstract
PURPOSE Grinding trajectory planning for robot-assisted laminectomy is a complicated and cumbersome task. The purpose of this research is to automatically obtain the surgical target area from the CT image and, based on this, formulate a reasonable robotic grinding trajectory. METHODS We propose a deep neural network for laminae positioning, a trajectory generation strategy, and a grinding speed adjustment strategy. These algorithms obtain surgical information from CT images and automatically complete grinding trajectory planning. RESULTS The proposed laminae positioning network reaches a recognition accuracy of 95.7%, and the positioning error is only 1.12 mm in the desired direction. Simulated surgical planning on the public dataset achieved the expected results. In a set of comparative robotic grinding experiments, trials using the speed adjustment algorithm obtained a smoother grinding force. CONCLUSION Our work can automatically and precisely extract laminar centers from the CT image to formulate a reasonable surgical trajectory plan. It simplifies the surgical planning process and reduces the time needed for surgeons to perform such a cumbersome operation manually.
Collapse
Affiliation(s)
- Qian Li
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Zhijiang Du
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Hongjian Yu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China.
| |
Collapse
|
20
|
Caprara S, Carrillo F, Snedeker JG, Farshad M, Senteler M. Automated Pipeline to Generate Anatomically Accurate Patient-Specific Biomechanical Models of Healthy and Pathological FSUs. Front Bioeng Biotechnol 2021; 9:636953. [PMID: 33585436 PMCID: PMC7876284 DOI: 10.3389/fbioe.2021.636953] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 01/11/2021] [Indexed: 12/29/2022] Open
Abstract
State-of-the-art preoperative biomechanical analysis for the planning of spinal surgery requires not only the generation of three-dimensional patient-specific models but also an accurate biomechanical representation of the vertebral joints. The benefits offered by computational models suitable for such purposes are still outweighed by the time and effort required for their generation, compromising their applicability in a clinical environment. In this work, we aim to ease the integration of computerized methods into patient-specific planning of spinal surgery. We present the first pipeline combining deep learning and finite element methods that allows completely automated model generation of functional spine units (FSUs) of the lumbar spine for patient-specific FE simulations (FEBio). The pipeline consists of three steps: (a) multiclass segmentation of cropped 3D CT images containing lumbar vertebrae using the DenseVNet network, (b) automatic landmark-based mesh fitting of statistical shape models onto 3D semantically segmented meshes of the vertebral models, and (c) automatic generation of patient-specific FE models of lumbar segments for the simulation of flexion-extension, lateral bending, and axial rotation movements. The automatic segmentation of FSUs was evaluated against the gold standard (manual segmentation) using 10-fold cross-validation. The obtained Dice coefficient was 93.7% on average, with a mean surface distance of 0.88 mm and a mean Hausdorff distance of 11.16 mm (N = 150). Automatic generation of finite element models to simulate the range of motion (ROM) was successfully performed for five healthy and five pathological FSUs. The results of the simulations were evaluated against the literature and showed comparable ROMs in both healthy and pathological cases, including the alteration of ROM typically observed in severely degenerated FSUs. The major intent of this work is to automate the creation of anatomically accurate patient-specific models in a single pipeline allowing functional modeling of spinal motion in healthy and pathological FSUs. Our approach reduces manual effort to a minimum, and execution of the entire pipeline, including simulations, takes approximately 2 h. The automation, time-efficiency, and robustness of the pipeline represent a first step toward its clinical integration.
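Step (b) hinges on landmark-based fitting of a statistical shape model. A self-contained sketch of the usual rigid initialization (Kabsch alignment of model landmarks to detected ones; the function is illustrative, not the pipeline's code):

    import numpy as np

    def rigid_align(src, dst):
        # Least-squares rotation R and translation t mapping src onto dst,
        # both given as (N, 3) arrays of corresponding landmarks.
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

The aligned model, (R @ src.T).T + t, would then serve as the starting point for the non-rigid mesh fitting described in the paper.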
Collapse
Affiliation(s)
- Sebastiano Caprara
- Department of Orthopedics, University Hospital Balgrist, University of Zurich, Zurich, Switzerland
- Institute for Biomechanics, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
| | - Fabio Carrillo
- Institute for Biomechanics, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
- Research in Orthopedic Computer Science, University Hospital Balgrist, Zurich, Switzerland
| | - Jess G. Snedeker
- Department of Orthopedics, University Hospital Balgrist, University of Zurich, Zurich, Switzerland
- Institute for Biomechanics, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
| | - Mazda Farshad
- Department of Orthopedics, University Hospital Balgrist, University of Zurich, Zurich, Switzerland
| | - Marco Senteler
- Department of Orthopedics, University Hospital Balgrist, University of Zurich, Zurich, Switzerland
- Institute for Biomechanics, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
| |
Collapse
|
21
|
Kitahama Y, Shizuka H, Kimura R, Suzuki T, Ohara Y, Miyake H, Sakai K. Fluid Lubrication and Cooling Effects in Diamond Grinding of Human Iliac Bone. Medicina (Kaunas) 2021; 57:71. [PMID: 33466923 PMCID: PMC7830225 DOI: 10.3390/medicina57010071] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 01/03/2021] [Accepted: 01/07/2021] [Indexed: 11/16/2022]
Abstract
Background and Objectives: Although there has been research on bone cutting, there has been little research on bone grinding. This study reports measurement results from an experimental system that simulated partial laminectomy in microscopic spine surgery. The purpose was to examine fluid lubrication and cooling in bone grinding, the histological characteristics of the workpieces, and differences between manual grinding and grinding with a milling machine. Materials and Methods: Thiel-fixed human iliac bones were used as workpieces. A neurosurgical microdrill was used as the drill system. The workpieces were fixed to a 4-component piezo-electric dynamometer with fixtures, which was used to measure the triaxial force during bone grinding. Grinding tasks were performed manually and with a small milling machine, with or without water. Results: In bone grinding with 4-mm diameter diamond burs and water, a reduction in the number of sudden increases in grinding resistance and a cooling effect of over 100 °C were confirmed. Conclusion: Manual grinding may enable control of the grinding speed and cutting depth while giving top priority to applying uniform tool torque to the workpiece. Observing the drill tip with a triaxial dynamometer to quantify surgery may provide useful data for developing safety mechanisms that prevent sudden deviation of the drill tip.
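The dynamometer analysis can be pictured with a short sketch: compute the resultant force from the three channels and flag sudden rises (the sampling rate and threshold below are assumed values, not the study's):

    import numpy as np

    def detect_force_spikes(fx, fy, fz, fs=1000.0, slope_thresh=50.0):
        # fx, fy, fz: equally long arrays of dynamometer samples [N];
        # fs: sampling rate [Hz]; slope_thresh: rise-rate threshold [N/s].
        f = np.sqrt(fx**2 + fy**2 + fz**2)   # resultant grinding force
        dfdt = np.gradient(f, 1.0 / fs)      # rate of change of force
        return np.flatnonzero(dfdt > slope_thresh)

Counting such indices per trial mirrors the paper's comparison of sudden increases in grinding resistance with and without water.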
Collapse
Affiliation(s)
- Yoshihiro Kitahama
- Spine Center, Omaezaki Municipal Hospital, Shizuoka 437-1696, Japan;
- Medical Photonics Research Center, Hamamatsu University School of Medicine, Hamamatsu 431-3192, Japan;
| | - Hiroo Shizuka
- Department of Mechanical Engineering, Faculty of Engineering, Shizuoka University, Hamamatsu 422-8529, Japan; (H.S.); (R.K.); (K.S.)
| | - Ritsu Kimura
- Department of Mechanical Engineering, Faculty of Engineering, Shizuoka University, Hamamatsu 422-8529, Japan; (H.S.); (R.K.); (K.S.)
| | - Tomo Suzuki
- Spine Center, Omaezaki Municipal Hospital, Shizuoka 437-1696, Japan;
| | - Yukoh Ohara
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo 113-8421, Japan;
| | - Hideaki Miyake
- Medical Photonics Research Center, Hamamatsu University School of Medicine, Hamamatsu 431-3192, Japan;
| | - Katsuhiko Sakai
- Department of Mechanical Engineering, Faculty of Engineering, Shizuoka University, Hamamatsu 422-8529, Japan; (H.S.); (R.K.); (K.S.)
| |
Collapse
|
22
|
Pang S, Pang C, Zhao L, Chen Y, Su Z, Zhou Y, Huang M, Yang W, Lu H, Feng Q. SpineParseNet: Spine Parsing for Volumetric MR Image by a Two-Stage Segmentation Framework With Semantic Image Representation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:262-273. [PMID: 32956047 DOI: 10.1109/tmi.2020.3025087] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Spine parsing (i.e., multi-class segmentation of the vertebrae and intervertebral discs (IVDs)) for volumetric magnetic resonance (MR) images plays a significant role in the diagnosis and treatment of various spinal diseases, yet it remains a challenge due to the inter-class similarity and intra-class variation of spine images. Existing fully convolutional network-based methods fail to explicitly exploit the dependencies between different spinal structures. In this article, we propose a novel two-stage framework named SpineParseNet to achieve automated spine parsing for volumetric MR images. SpineParseNet consists of a 3D graph convolutional segmentation network (GCSN) for 3D coarse segmentation and a 2D residual U-Net (ResUNet) for 2D segmentation refinement. In the 3D GCSN, region pooling is employed to project the image representation to a graph representation, in which each node representation denotes a specific spinal structure. The adjacency matrix of the graph is designed according to the connections between spinal structures. The graph representation is evolved by graph convolutions. Subsequently, the proposed region unpooling module re-projects the evolved graph representation to a semantic image representation, which facilitates the 3D GCSN in generating a reliable coarse segmentation. Finally, the 2D ResUNet refines the segmentation. Experiments on T2-weighted volumetric MR images of 215 subjects show that SpineParseNet achieves impressive performance, with mean Dice similarity coefficients of 87.32 ± 4.75%, 87.78 ± 4.64%, and 87.49 ± 3.81% for the segmentation of 10 vertebrae, 9 IVDs, and all 19 spinal structures, respectively. The proposed method has great potential in the clinical diagnosis and treatment of spinal diseases.
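The projection from image to graph representation and one graph-convolution step can be sketched in a few lines of numpy (a simplified, non-learned illustration of region pooling and a Kipf-Welling-style layer, not the SpineParseNet code):

    import numpy as np

    def region_pool(features, labels, n_nodes):
        # features: (C, D, H, W) CNN features; labels: (D, H, W) region map
        # with values 1..n_nodes (0 = background). Returns (n_nodes, C).
        nodes = np.zeros((n_nodes, features.shape[0]))
        for k in range(n_nodes):
            mask = labels == k + 1
            if mask.any():
                nodes[k] = features[:, mask].mean(axis=1)
        return nodes

    def graph_conv(nodes, adj, weight):
        # Symmetrically normalized graph convolution with self-loops + ReLU.
        a_hat = adj + np.eye(len(adj))
        d_inv = 1.0 / np.sqrt(a_hat.sum(axis=1))
        a_norm = a_hat * d_inv[:, None] * d_inv[None, :]
        return np.maximum(a_norm @ nodes @ weight, 0.0)

Here adj would encode the anatomical connections between vertebrae and IVDs (each node linked to its neighbors along the spine), matching the hand-designed adjacency matrix described in the abstract.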
Collapse
|
23
|
Liang X, Zhao W, Hristov DH, Buyyounouski MK, Hancock SL, Bagshaw H, Zhang Q, Xie Y, Xing L. A deep learning framework for prostate localization in cone beam CT-guided radiotherapy. Med Phys 2020; 47:4233-4240. [PMID: 32583418 PMCID: PMC10823910 DOI: 10.1002/mp.14355] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Revised: 06/11/2020] [Accepted: 06/17/2020] [Indexed: 01/31/2024] Open
Abstract
PURPOSE To develop a deep learning-based model for prostate planning target volume (PTV) localization on cone beam computed tomography (CBCT) to improve the workflow of CBCT-guided patient setup. METHODS A two-step task-based residual network (T2RN) is proposed to automatically identify inherent landmarks in the prostate PTV. The input to the T2RN is the pretreatment CBCT images of the patient, and the output is the deep learning-identified landmarks in the PTV. To ensure robust PTV localization, the T2RN model is trained using over a thousand sets of CT images with labeled landmarks, each CT corresponding to a different scenario of patient position and/or anatomy distribution generated by synthetically changing the planning CT (pCT) image. The changes, including translation, rotation, and deformation, represent the vast range of possible anatomy variations during a course of radiation therapy (RT). The trained patient-specific T2RN model is tested using 240 CBCTs from six patients. The testing CBCTs consist of 120 original CBCTs and 120 synthetic CBCTs. The synthetic CBCTs are generated by applying rotation/translation transformations to each of the original CBCTs. RESULTS The systematic/random setup errors between the model prediction and the reference are found to be <0.25/2.46 mm and 0.14/1.41° in the translation and rotation dimensions, respectively. Pearson's correlation coefficient between the model prediction and the reference is higher than 0.94 in the translation and rotation dimensions. The Bland-Altman plots show good agreement between the two techniques. CONCLUSIONS A novel T2RN deep learning technique is established to localize the prostate PTV for RT patient setup. Our results show that highly accurate marker-less prostate setup is achievable by leveraging the state-of-the-art deep learning strategy.
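The synthetic-scenario training data could be generated along these lines (a scipy sketch under assumed parameter ranges; the paper's transformation model, which also includes deformation, is richer):

    import numpy as np
    from scipy import ndimage

    def random_rigid_ct(pct, max_shift_mm=5.0, max_rot_deg=3.0, spacing=1.0,
                        rng=None):
        # One synthetic training volume from the planning CT (pct): random
        # small rotations about all three axes plus a random translation.
        rng = rng or np.random.default_rng()
        vol = pct
        for axes in [(0, 1), (0, 2), (1, 2)]:
            angle = rng.uniform(-max_rot_deg, max_rot_deg)
            vol = ndimage.rotate(vol, angle, axes=axes, reshape=False,
                                 order=1, mode='nearest')
        shift_vox = rng.uniform(-max_shift_mm, max_shift_mm, 3) / spacing
        return ndimage.shift(vol, shift_vox, order=1, mode='nearest')

Applying the same (known) transform to the labeled landmarks yields the ground truth for each synthetic scenario.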
Collapse
Affiliation(s)
- Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055 China
| | - Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
| | - Dimitre H. Hristov
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
| | | | - Steven L. Hancock
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
| | - Hilary Bagshaw
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
| | - Qin Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
| | - Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055 China
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
| |
Collapse
|
24
|
Liu H, Wang H, Wu Y, Xing L. Superpixel Region Merging Based on Deep Network for Medical Image Segmentation. ACM T INTEL SYST TEC 2020. [DOI: 10.1145/3386090] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Automatic and accurate semantic segmentation of pathological structures in medical images is challenging because of noisy disturbance, deformable pathological shapes, and low contrast between soft tissues. Classical superpixel-based classification algorithms suffer from edge leakage due to the complexity and heterogeneity inherent in medical images. Therefore, we propose a deep U-Net that incorporates superpixel region merging for edge enhancement to facilitate and optimize segmentation. Our approach combines three innovations: (1) unlike purely deep learning-based image segmentation, the segmentation evolves from superpixel region merging guided by U-Net training, exploiting rich semantic information in addition to gray-level similarity; (2) a bilateral filtering module is adopted at the beginning of the network to eliminate external noise and enhance soft-tissue contrast at pathology edges; and (3) a normalization layer is inserted after the convolutional layer at each feature scale to prevent overfitting and increase sensitivity to model parameters. The model was validated on lung CT, brain MR, and coronary CT datasets. Different superpixel methods and cross-validation show the effectiveness of this architecture. Hyperparameter settings were explored empirically to achieve a good trade-off between performance and efficiency; a four-layer network achieved the best results in precision, recall, F-measure, and running speed. Our method outperformed state-of-the-art networks, including FCN-16s, SegNet, PSPNet, DeepLabv3, and the traditional U-Net, both quantitatively and qualitatively. Source code for the complete method is available at https://github.com/Leahnawho/Superpixel-network.
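The superpixel generation and region-merging front end can be approximated with scikit-image (a sketch, not the released code; it assumes skimage >= 0.20, where the graph module is top-level, while earlier releases expose it as skimage.future.graph):

    import numpy as np
    from skimage import color, graph, segmentation

    def superpixel_merge(image, n_segments=400, merge_thresh=0.08):
        # SLIC superpixels followed by merging of adjacent regions whose
        # mean colors differ by less than merge_thresh.
        rgb = color.gray2rgb(image) if image.ndim == 2 else image
        labels = segmentation.slic(rgb, n_segments=n_segments,
                                   compactness=10, start_label=1)
        rag = graph.rag_mean_color(rgb, labels)
        return graph.cut_threshold(labels, rag, merge_thresh)

In the paper, the merging decision is learned via U-Net training rather than thresholded on mean color; the sketch only shows the classical superpixel-plus-RAG scaffold that the network refines.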
Collapse
Affiliation(s)
- Hui Liu
- Shandong University of Finance and Economics and Stanford University, Jinan, Shandong Province, China
| | - Haiou Wang
- Shandong University of Finance and Economics, Jinan, Shandong Province, China
| | - Yan Wu
- Stanford University, CA, USA
| | | |
Collapse
|
25
|
Li Q, Du Z, Yu H. Trajectory planning for robot-assisted laminectomy decompression based on CT images. IOP Conf Ser Mater Sci Eng 2020; 768:042037. [DOI: 10.1088/1757-899x/768/4/042037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
26
|
Gurav SB, Kulhalli KV, Desai VV. PROSTATE CANCER DETECTION USING HISTOPATHOLOGY IMAGES AND CLASSIFICATION USING IMPROVED RideNN. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2019. [DOI: 10.4015/s101623721950042x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Prostate cancer is reported to be common among men, underscoring the need for its detection; the required morphology is extracted from histopathology images. The Gleason grading system remains the standard for grading prostate cancer, but it is subject to inter- and intra-observer variation among pathologists. Thus, an automatic method for segmenting and classifying prostate cancer is proposed in this paper. The distinguishing feature of the developed method is that segmentation and classification are gland-oriented, using a Color Space (CS) transformation and the Salp Swarm Optimization Algorithm-based Rider Neural Network (SSA-RideNN). The gland region is taken as the morphology for cancer detection, from which the most significant regions are extracted as features using the multiple-kernel scale-invariant feature transform (MK-SIFT). Here, the RideNN classifier is trained optimally using the proposed Salp-Rider Algorithm (SRA), an integration of the Salp Swarm Optimization Algorithm (SSA) and the Rider Optimization Algorithm (ROA). The experiments were performed using histopathology images, and analysis based on sensitivity, accuracy, and specificity shows that the proposed prostate cancer detection method achieved maximal accuracy, sensitivity, and specificity of 0.8966, 0.8919, and 0.8596, respectively.
Collapse
Affiliation(s)
- Shashidhar B. Gurav
- Sharad Institute of Technology, College of Engineering, Ichal Karanji, Kolhapur 416121, Maharashtra, India
| | - Kshama V. Kulhalli
- D Y Patil College of Engineering and Technology, Kasaba Bawada, Kolhapur 416006, Maharashtra, India
| | - Veena V. Desai
- Department of Computer Science and Engineering, KLS Gogte Institute of Technology, Udyambag, Belagavi 590008, Karnataka, India
| |
Collapse
|
27
|
Lessmann N, van Ginneken B, de Jong PA, Išgum I. Iterative fully convolutional neural networks for automatic vertebra segmentation and identification. Med Image Anal 2019; 53:142-155. [PMID: 30771712 DOI: 10.1016/j.media.2019.02.005] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 01/19/2019] [Accepted: 02/11/2019] [Indexed: 12/28/2022]
Abstract
Precise segmentation and anatomical identification of the vertebrae provides the basis for automatic analysis of the spine, such as detection of vertebral compression fractures or other abnormalities. Most dedicated spine CT and MR scans, as well as scans of the chest, abdomen or neck, cover only part of the spine. Segmentation and identification should therefore not rely on the visibility of certain vertebrae or a certain number of vertebrae. We propose an iterative instance segmentation approach that uses a fully convolutional neural network to segment and label vertebrae one after the other, independently of the number of visible vertebrae. This instance-by-instance segmentation is enabled by combining the network with a memory component that retains information about already segmented vertebrae. The network iteratively analyzes image patches, using information from both image and memory to search for the next vertebra. To traverse the image efficiently, we include the prior knowledge that the vertebrae are always located next to each other, which is used to follow the vertebral column. The network concurrently performs multiple tasks: segmentation of a vertebra, regression of its anatomical label, and prediction of whether the vertebra is completely visible in the image, which allows incompletely visible vertebrae to be excluded from further analyses. The predicted anatomical labels of the individual vertebrae are additionally refined with a maximum likelihood approach, choosing the overall most likely labeling when all detected vertebrae are taken into account. This method was evaluated on five diverse datasets, including multiple modalities (CT and MR), various fields of view and coverage of different sections of the spine, and a particularly challenging set of low-dose chest CT scans. For vertebra segmentation, the average Dice score was 94.9 ± 2.1% with an average absolute symmetric surface distance of 0.2 ± 10.1 mm. The anatomical identification had an accuracy of 93%, corresponding to a single case with mislabeled vertebrae. Vertebrae were classified as completely or incompletely visible with an accuracy of 97%. The proposed iterative segmentation method compares favorably with state-of-the-art methods and is fast, flexible and generalizable.
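The instance-by-instance loop reads naturally as a skeleton; in this hedged Python sketch, net and patch_finder are hypothetical stand-ins for the trained network and the column-following patch traversal:

    import numpy as np

    def iterative_vertebra_segmentation(volume, net, patch_finder,
                                        max_instances=30):
        # memory marks voxels already assigned to a segmented vertebra.
        memory = np.zeros(volume.shape, dtype=bool)
        instances = []
        for _ in range(max_instances):
            slices = patch_finder(volume, memory)  # next patch along spine
            if slices is None:                     # no further vertebra
                break
            mask, label, complete = net(volume[slices], memory[slices])
            if mask is None or not mask.any():
                break
            memory[slices] |= mask                 # update memory component
            instances.append((label, complete, slices, mask))
        return instances

The memory volume is what lets the network segment "the next" vertebra rather than all of them at once, and the completeness flag allows partially visible vertebrae to be excluded downstream, as the abstract describes.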
Collapse
Affiliation(s)
- Nikolas Lessmann
- Image Sciences Institute, University Medical Center Utrecht, Room Q.02.4.45, 3508 GA Utrecht, P.O. Box 85500, The Netherlands.
| | - Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center Nijmegen, The Netherlands
| | - Pim A de Jong
- Department of Radiology, University Medical Center Utrecht, The Netherlands; Utrecht University, The Netherlands
| | - Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht, Room Q.02.4.45, 3508 GA Utrecht, P.O. Box 85500, The Netherlands
| |
Collapse
|
28
|
Mahdy LN, Ezzat KA, Hassanien AE. Automatic detection System for Degenerative Disk and simulation for artificial disc replacement surgery in the spine. ISA TRANSACTIONS 2018; 81:244-258. [PMID: 30017173 DOI: 10.1016/j.isatra.2018.07.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2018] [Revised: 06/16/2018] [Accepted: 07/09/2018] [Indexed: 06/08/2023]
Abstract
This paper presents an automatic detection system for degenerative discs and simulates artificial disc replacement surgery in the spine. The system starts by visualizing the Digital Imaging and Communications in Medicine (DICOM) data in three views and then identifies the lumbar or sacral regions, followed by applying adaptive thresholding and a modified region growing technique to separate the spine from the surrounding tissues, organs, and bones. Each vertebra in the spine is then segmented using the K-means clustering algorithm. The intervertebral spacing is then computed to automatically detect the degenerative disc in the lumbar region. Finally, the replacement surgery is simulated to design a disc with exact dimensions and identify the appropriate site at which it will be implanted in the spine. The proposed system was tested on ten 3D computed tomography (CT) images downloaded from The Cancer Imaging Archive (TCIA). The artificial disc could be implanted in place of the degenerative disc in the space specified automatically by the proposed system. The experimental outcomes demonstrate that the intended system is efficacious in detecting degenerative discs and simulating artificial disc replacement.
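The intervertebral-distance check can be illustrated with a small scipy sketch (the label ordering and narrowing threshold are assumptions; the paper computes the spacing on its own segmentation):

    import numpy as np
    from scipy import ndimage

    def disc_spaces(label_volume, spacing=(1.0, 1.0, 1.0), narrow_ratio=0.6):
        # label_volume: integer mask with one label per vertebra, assumed
        # numbered craniocaudally; spacing: voxel size in mm.
        ids = sorted(int(i) for i in np.unique(label_volume) if i != 0)
        centers = np.array(ndimage.center_of_mass(label_volume > 0,
                                                  label_volume, ids))
        gaps = np.linalg.norm(np.diff(centers * np.asarray(spacing), axis=0),
                              axis=1)
        degenerate = gaps < narrow_ratio * np.median(gaps)  # narrowed spaces
        return gaps, degenerate

Centroid-to-centroid distance is only a proxy for disc height, but a gap well below the patient's median spacing is a reasonable flag for a degenerative disc.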
Collapse
Affiliation(s)
- Lamia Nabil Mahdy
- Higher Technological Institute, Biomedical Engineering Department, Egypt; Scientific Research Group in Egypt (SRGE), Egypt
| | - Kadry Ali Ezzat
- Higher Technological Institute, Biomedical Engineering Department, Egypt; Scientific Research Group in Egypt (SRGE), Egypt
| | - Aboul Ella Hassanien
- Faculty of Computers and Information, Cairo University, Cairo, Egypt; Scientific Research Group in Egypt (SRGE), Egypt.
| |
Collapse
|
29
|
Elhalawani H, Lin TA, Volpe S, Mohamed ASR, White AL, Zafereo J, Wong AJ, Berends JE, AboHashem S, Williams B, Aymard JM, Kanwar A, Perni S, Rock CD, Cooksey L, Campbell S, Yang P, Nguyen K, Ger RB, Cardenas CE, Fave XJ, Sansone C, Piantadosi G, Marrone S, Liu R, Huang C, Yu K, Li T, Yu Y, Zhang Y, Zhu H, Morris JS, Baladandayuthapani V, Shumway JW, Ghosh A, Pöhlmann A, Phoulady HA, Goyal V, Canahuate G, Marai GE, Vock D, Lai SY, Mackin DS, Court LE, Freymann J, Farahani K, Kalpathy-Cramer J, Fuller CD. Machine Learning Applications in Head and Neck Radiation Oncology: Lessons From Open-Source Radiomics Challenges. Front Oncol 2018; 8:294. [PMID: 30175071 PMCID: PMC6107800 DOI: 10.3389/fonc.2018.00294] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Accepted: 07/16/2018] [Indexed: 12/13/2022] Open
Abstract
Radiomics leverages existing image datasets to provide non-visible data extraction via image post-processing, with the aim of identifying prognostic and predictive imaging features at a sub-region-of-interest level. However, the application of radiomics is hampered by several challenges, such as the lack of standardization of image acquisition and analysis methods, impeding generalizability. As yet, radiomics remains intriguing but not clinically validated. We aimed to test the feasibility of a non-custom-constructed platform for disseminating existing large, standardized databases across institutions to promote radiomics studies. Hence, the University of Texas MD Anderson Cancer Center organized two public radiomics challenges in the head and neck radiation oncology domain. This was done in conjunction with the MICCAI 2016 satellite symposium using Kaggle-in-Class, a machine-learning and predictive analytics platform. We drew on clinical data matched to radiomics data derived from diagnostic contrast-enhanced computed tomography (CECT) images in a dataset of 315 patients with oropharyngeal cancer. Contestants were tasked with developing models for (i) classifying patients according to their human papillomavirus status or (ii) predicting local tumor recurrence following radiotherapy. Data were split into training and test sets. Seventeen teams from various professional domains participated in one or both of the challenges. This review is based on contestants' feedback, provided by only 8 contestants (47%). Six contestants (75%) incorporated extracted radiomics features into their predictive model building, either alone (n = 5; 62.5%), as was the case with the winner of the "HPV" challenge, or in conjunction with matched clinical attributes (n = 2; 25%). Notably, only 23% of contestants, including the winner of the "local recurrence" challenge, built their models relying solely on clinical data. In addition to the value of integrating machine learning into clinical decision-making, our experience sheds light on the challenges of sharing and directing existing datasets toward clinical applications of radiomics, including the hyper-dimensionality of the clinical/imaging data attributes. Our experience may help guide researchers in creating a framework for the sharing and reuse of already published data, which we believe will ultimately accelerate the pace of clinical applications of radiomics, in both challenge and clinical settings.
Collapse
Affiliation(s)
- Hesham Elhalawani
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Timothy A. Lin
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Baylor College of Medicine, Houston, TX, United States
| | - Stefania Volpe
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Università degli Studi di Milano, Milan, Italy
| | - Abdallah S. R. Mohamed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Clinical Oncology and Nuclear Medicine, Alexandria University, Alexandria, Egypt
| | - Aubrey L. White
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- McGovern Medical School, University of Texas, Houston, TX, United States
| | - James Zafereo
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- McGovern Medical School, University of Texas, Houston, TX, United States
| | - Andrew J. Wong
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- School of Medicine, The University of Texas Health Science Center San Antonio, San Antonio, TX, United States
| | - Joel E. Berends
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- School of Medicine, The University of Texas Health Science Center San Antonio, San Antonio, TX, United States
| | - Shady AboHashem
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Cardiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
| | - Bowman Williams
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Furman University, Greenville, SC, United States
| | - Jeremy M. Aymard
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Abilene Christian University, Abilene, TX, United States
| | - Aasheesh Kanwar
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Radiation Oncology, Oregon Health and Science University, Portland, OR, United States
| | - Subha Perni
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| | - Crosby D. Rock
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Texas Tech University Health Sciences Center El Paso, El Paso, TX, United States
| | - Luke Cooksey
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- University of North Texas Health Science Center, Fort Worth, TX, United States
| | - Shauna Campbell
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Radiation Oncology, Cleveland Clinic, Cleveland, OH, United States
| | - Pei Yang
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Baylor College of Medicine, Houston, TX, United States
| | - Khahn Nguyen
- Colgate University, Hamilton, NY, United States
| | - Rachel B. Ger
- Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
- Department of Radiation Physics, Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
| | - Carlos E. Cardenas
- Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
- Department of Radiation Physics, Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
| | - Xenia J. Fave
- Moores Cancer Center, University of California, La Jolla, San Diego, CA, United States
| | - Carlo Sansone
- Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, Università Degli Studi di Napoli Federico II, Naples, Italy
| | - Gabriele Piantadosi
- Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, Università Degli Studi di Napoli Federico II, Naples, Italy
| | - Stefano Marrone
- Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, Università Degli Studi di Napoli Federico II, Naples, Italy
| | - Rongjie Liu
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Chao Huang
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Kaixian Yu
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Tengfei Li
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Yang Yu
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Youyi Zhang
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Hongtu Zhu
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Jeffrey S. Morris
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Veerabhadran Baladandayuthapani
- Baylor College of Medicine, Houston, TX, United States
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - John W. Shumway
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Alakonanda Ghosh
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Andrei Pöhlmann
- Fraunhofer-Institut für Fabrikbetrieb und Automatisierung (IFF), Magdeburg, Germany
| | - Hady A. Phoulady
- Department of Computer Science, University of Southern Maine, Portland, ME, United States
| | - Vibhas Goyal
- Indian Institute of Technology Hyderabad, Sangareddy, India
| | | | | | - David Vock
- Department of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN, United States
| | - Stephen Y. Lai
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Dennis S. Mackin
- Colgate University, Hamilton, NY, United States
- Department of Radiation Physics, Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
| | - Laurence E. Court
- Colgate University, Hamilton, NY, United States
- Department of Radiation Physics, Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
| | - John Freymann
- Frederick National Laboratory for Cancer Research, Leidos Biomedical Research, Inc., Frederick, MD, United States
| | - Keyvan Farahani
- National Cancer Institute, Rockville, MD, United States
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, United States
- Jayashree Kalpathy-Cramer
- Department of Radiology and Athinoula A. Martinos Center for Biomedical Imaging, MGH/Harvard Medical School, Boston, MA, United States
| | - Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Baylor College of Medicine, Houston, TX, United States
- Department of Radiation Physics, Graduate School of Biomedical Sciences, MD Anderson Cancer Center, Houston, TX, United States
| |
Collapse
|
30
|
Wu J, Tha KK, Xing L, Li R. Radiomics and radiogenomics for precision radiotherapy. JOURNAL OF RADIATION RESEARCH 2018; 59:i25-i31. [PMID: 29385618 PMCID: PMC5868194 DOI: 10.1093/jrr/rrx102] [Citation(s) in RCA: 69] [Impact Index Per Article: 9.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 12/14/2017] [Indexed: 06/07/2023]
Abstract
Imaging plays an important role in the diagnosis and staging of cancer, as well as in radiation treatment planning and evaluation of therapeutic response. Recently, there has been significant interest in extracting quantitative information from clinical standard-of-care images, i.e. radiomics, in order to provide a more comprehensive characterization of image phenotypes of the tumor. A number of studies have demonstrated that a deeper radiomic analysis can reveal novel image features that could provide useful diagnostic, prognostic or predictive information, improving upon currently used imaging metrics such as tumor size and volume. Furthermore, these imaging-derived phenotypes can be linked with genomic data, i.e. radiogenomics, in order to understand their biological underpinnings or further improve the prediction accuracy of clinical outcomes. In this article, we will provide an overview of radiomics and radiogenomics, including their rationale, technical and clinical aspects. We will also present some examples of the current results and some emerging paradigms in radiomics and radiogenomics for clinical oncology, with a focus on potential applications in radiotherapy. Finally, we will highlight the challenges in the field and suggest possible future directions in radiomics to maximize its potential impact on precision radiotherapy.
Collapse
Affiliation(s)
- Jia Wu
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, USA
| | - Khin Khin Tha
- Global Station for Quantum Biomedical Science and Engineering, Global Institute for Cooperative Research and Education, Hokkaido University, Kita 15, Nishi 7, Kita-ku, Sapporo 060-8638, Japan
| | - Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, USA
- Global Station for Quantum Biomedical Science and Engineering, Global Institute for Cooperative Research and Education, Hokkaido University, Kita 15, Nishi 7, Kita-ku, Sapporo 060-8638, Japan
| | - Ruijiang Li
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, USA
- Global Station for Quantum Biomedical Science and Engineering, Global Institute for Cooperative Research and Education, Hokkaido University, Kita 15, Nishi 7, Kita-ku, Sapporo 060-8638, Japan
| |
Collapse
|
31
|
Ibragimov B, Toesca D, Chang D, Koong A, Xing L. Combining deep learning with anatomical analysis for segmentation of the portal vein for liver SBRT planning. Phys Med Biol 2017; 62:8943-8958. [PMID: 28994665 DOI: 10.1088/1361-6560/aa9262] [Citation(s) in RCA: 61] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Automated segmentation of the portal vein (PV) for liver radiotherapy planning is a challenging task due to potentially low vasculature contrast, complex PV anatomy, and image artifacts originating from fiducial markers and vascular stents. In this paper, we propose a novel framework for automated segmentation of the PV from computed tomography (CT) images. We apply convolutional neural networks (CNNs) to learn the consistent appearance patterns of the PV using a training set of CT images with reference annotations and then enhance the PV in previously unseen CT images. Markov random fields (MRFs) were further used to smooth the results of the CNN enhancement and remove isolated mis-segmented regions. Finally, the CNN-MRF-based enhancement was augmented with PV centerline detection that relied on PV anatomical properties such as tubularity and branch composition. The framework was validated on a clinical database with 72 CT images of patients scheduled for liver stereotactic body radiation therapy. The obtained segmentation accuracy was 0.83 and 1.08 mm in terms of the median Dice coefficient and mean symmetric surface distance, respectively, when segmentation is restricted to the PV region of interest. The obtained results indicate that CNNs and anatomical analysis can be used for accurate segmentation of the PV and can potentially be integrated into liver radiation therapy planning.
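The tubularity analysis used for centerline detection is in the spirit of classical vesselness filtering; a 2D scikit-image sketch follows (the threshold and scales are illustrative, and the paper's centerline logic additionally uses branch composition):

    import numpy as np
    from skimage.filters import frangi
    from skimage.morphology import skeletonize

    def vessel_centerline_2d(ct_slice, sigmas=(1, 2, 3), thresh=0.05):
        # Frangi vesselness highlights bright tubular structures at the
        # given scales; thresholding + skeletonization yields a centerline.
        v = frangi(ct_slice, sigmas=sigmas, black_ridges=False)
        mask = v > thresh * v.max()
        return skeletonize(mask)

In the paper this anatomical step complements the CNN-MRF enhancement, rejecting candidate regions that lack the tubular, branching geometry expected of the PV.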
Collapse
Affiliation(s)
- Bulat Ibragimov
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Palo Alto, CA 94305, United States of America
| | | | | | | | | |
Collapse
|