1. Ye K, Sun W, Tao R, Zheng G. A Projective-Geometry-Aware Network for 3D Vertebra Localization in Calibrated Biplanar X-Ray Images. Sensors (Basel) 2025;25:1123. PMID: 40006352; PMCID: PMC11858964; DOI: 10.3390/s25041123.
Abstract
Current Deep Learning (DL)-based methods for vertebra localization in biplanar X-ray images mainly focus on two-dimensional (2D) information and neglect the projective geometry, limiting the accuracy of 3D navigation in X-ray-guided spine surgery. A 3D vertebra localization method from calibrated biplanar X-ray images is highly desired to address the problem. In this study, a projective-geometry-aware network for localizing 3D vertebrae in calibrated biplanar X-ray images, referred to as ProVLNet, is proposed. The network design of ProVLNet features three components: a Siamese 2D feature extractor to extract local appearance features from the biplanar X-ray images, a spatial alignment fusion module to incorporate the projective geometry in fusing the extracted 2D features in 3D space, and a 3D landmark regression module to regress the 3D coordinates of the vertebrae from the 3D fused features. Evaluated on two typical and challenging datasets acquired from the lumbar and the thoracic spine, ProVLNet achieved an identification rate of 99.53% and 98.98% and a point-to-point error of 0.64 mm and 1.38 mm, demonstrating superior performance of our proposed approach over the state-of-the-art (SOTA) methods.
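The geometric constraint that distinguishes this approach from purely 2D methods is the calibrated-projection relationship: a 3D vertebral landmark must project consistently into both views. The abstract does not give the authors' implementation, so the sketch below only illustrates that underlying relationship with a minimal NumPy direct linear transform (DLT) triangulation; the 3x4 projection matrices and the test point are invented placeholders, not values from the paper.

```python
import numpy as np

def triangulate_landmark(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : (3, 4) projection matrices of the biplanar views.
    uv1, uv2 : (2,) detected 2D landmark coordinates in each image.
    Returns the 3D point in the common world frame.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations of the form u * P[2] - P[0] = 0.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A x = 0 for the homogeneous point via SVD (last right-singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # Hypothetical AP and lateral projection matrices (assumptions, for illustration only).
    P_ap = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 1.0]])
    P_lat = np.array([[0, 0, 1.0, 0], [0, 1.0, 0, 0], [1.0, 0, 0, 1000.0]])
    X_true = np.array([25.0, -12.0, 40.0])
    project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
    X_hat = triangulate_landmark(P_ap, P_lat, project(P_ap, X_true), project(P_lat, X_true))
    print(np.round(X_hat, 3))  # should recover [25, -12, 40]
```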
Affiliation(s)
- Guoyan Zheng: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2. Kang Z, Shi G, Zhu Y, Li F, Li X, Wang H. Development of a model for measuring sagittal plane parameters in 10-18-year old adolescents with idiopathic scoliosis based on RTMpose deep learning technology. J Orthop Surg Res 2025;20:41. PMID: 39799363; PMCID: PMC11724490; DOI: 10.1186/s13018-024-05334-2.
Abstract
PURPOSE The study aimed to develop a deep learning model for rapid, automated measurement of full-spine X-rays in adolescents with Adolescent Idiopathic Scoliosis (AIS). A significant challenge in this field is the time-consuming nature of manual measurements and the inter-individual variability in these measurements. To address these challenges, we utilized RTMpose deep learning technology to automate the process. METHODS We conducted a retrospective multicenter diagnostic study using 560 full-spine sagittal plane X-ray images from five hospitals in Inner Mongolia. The model was trained and validated using 500 images, with an additional 60 images for independent external validation. We evaluated the consistency of keypoint annotations among different physicians, the accuracy of model-predicted keypoints, and the accuracy of model measurement results compared to manual measurements. RESULTS The consistency percentages of keypoint annotations among different physicians and the model were 90-97% within the 4-mm range. The model's prediction accuracies for key points were 91-100% within the 4-mm range compared to the reference standards. The model's predictions for 15 anatomical parameters showed high consistency with experienced physicians, with intraclass correlation coefficients ranging from 0.892 to 0.991. The mean absolute error for SVA was 1.16 mm, and for other parameters, it ranged from 0.22° to 3.32°. A significant challenge we faced was the variability in data formats and specifications across different hospitals, which we addressed through data augmentation techniques. The model took an average of 9.27 s to automatically measure the 15 anatomical parameters per X-ray image. CONCLUSION The deep learning model based on RTMpose can effectively enhance clinical efficiency by automatically measuring the sagittal plane parameters of the spine in X-rays of patients with AIS. The model's performance was found to be highly consistent with manual measurements by experienced physicians, offering a valuable tool for clinical diagnostics.
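The two headline numbers in this validation (the share of predicted keypoints falling within a 4 mm tolerance of the reference annotation, and the mean absolute error of the derived parameters) are simple to reproduce. The NumPy sketch below shows both computations; the coordinate arrays and the pixel-to-millimetre scale factor are hypothetical placeholders, not data from the study.

```python
import numpy as np

def keypoint_success_rate(pred_xy, ref_xy, mm_per_pixel, tol_mm=4.0):
    """Fraction of keypoints whose Euclidean error is within tol_mm of the reference."""
    err_mm = np.linalg.norm(pred_xy - ref_xy, axis=1) * mm_per_pixel
    return np.mean(err_mm <= tol_mm), err_mm

def mean_absolute_error(pred, ref):
    """MAE of derived parameters (e.g., angles in degrees or SVA in mm)."""
    return np.mean(np.abs(np.asarray(pred) - np.asarray(ref)))

# Hypothetical example: 5 keypoints predicted on one radiograph (pixel coordinates).
pred = np.array([[102, 310], [140, 355], [178, 402], [215, 450], [250, 498]], float)
ref  = np.array([[100, 312], [139, 353], [180, 405], [214, 449], [255, 500]], float)
rate, err = keypoint_success_rate(pred, ref, mm_per_pixel=0.5)
print(f"within 4 mm: {rate:.0%}, per-point error (mm): {np.round(err, 2)}")
print("MAE of sagittal angles (deg):", mean_absolute_error([32.1, 48.7], [30.9, 47.2]))
```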
Affiliation(s)
- Zhijie Kang: Department of Human Anatomy, Graduate School, Inner Mongolia Medical University, Hohhot, 010010, Inner Mongolia, China
- Guopeng Shi: Department of Human Anatomy, Graduate School, Inner Mongolia Medical University, Hohhot, 010010, Inner Mongolia, China
- Yong Zhu: Tumor Hospital, Affiliated to Inner Mongolia Medical University, Inner Mongolia Medical University, Hohhot, 010000, Inner Mongolia, China
- Feng Li: Department of Spinal Surgery, The Second Affiliated Hospital of Inner Mongolia Medical University, Hohhot, 010000, Inner Mongolia, China
- Xiaohe Li: Department of Human Anatomy, Graduate School, Inner Mongolia Medical University, Hohhot, 010010, Inner Mongolia, China
- Haiyan Wang: Department of Human Anatomy, Graduate School, Inner Mongolia Medical University, Hohhot, 010010, Inner Mongolia, China
3. Chen W, Han Y, Awais Ashraf M, Liu J, Zhang M, Su F, Huang Z, Wong KK. A patch-based deep learning MRI segmentation model for improving efficiency and clinical examination of the spinal tumor. J Bone Oncol 2024;49:100649. PMID: 39659517; PMCID: PMC11629321; DOI: 10.1016/j.jbo.2024.100649.
Abstract
Background and objective Magnetic resonance imaging (MRI) plays a vital role in diagnosing spinal diseases, including different types of spinal tumors. However, conventional segmentation techniques are often labor-intensive and susceptible to variability. This study aims to propose a fully automatic segmentation method for spine MRI images, utilizing a convolutional-deconvolution neural network and patch-based deep learning. The objective is to improve segmentation efficiency, meeting clinical needs for accurate diagnoses and treatment planning. Methods The methodology involved the utilization of a convolutional neural network to automatically extract deep learning features from spine data. This allowed for the effective representation of anatomical structures. The network was trained to learn discriminative features necessary for accurate segmentation of the spine MRI data. Furthermore, a patch extraction (PE)-based deep neural network was developed using a convolutional neural network to restore the feature maps to their original image size. To improve training efficiency, a combination of pre-training and an enhanced stochastic gradient descent method was utilized. Results The experimental results highlight the effectiveness of the proposed method for spine image segmentation using Gadolinium-enhanced T1 MRI. This approach not only delivers high accuracy but also offers real-time performance. The model attained 90.6% precision, 91.1% recall, 93.2% accuracy, 91.3% F1-score, 83.8% Intersection over Union (IoU), and 91.1% Dice Coefficient (DC). These results indicate that the proposed method can accurately segment spinal tumors in MRI images, addressing the limitations of traditional segmentation algorithms. Conclusion In conclusion, this study introduces a fully automated segmentation method for spine MRI images utilizing a convolutional neural network, enhanced by the application of the PE module. By utilizing patch extraction-based neural network (PENN) deep learning techniques, the proposed method effectively addresses the deficiencies of traditional algorithms and achieves accurate and real-time spine MRI image segmentation.
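All of the reported figures (precision, recall, accuracy, F1, IoU, Dice) follow from the voxel-wise confusion counts of a binary segmentation. A minimal NumPy sketch of those formulas is shown below; the toy masks stand in for a real MRI slice and are not data from the study.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Binary segmentation metrics from voxel-wise confusion counts."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.sum(pred & target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    tn = np.sum(~pred & ~target)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return dict(precision=precision, recall=recall, accuracy=accuracy,
                f1=f1, iou=iou, dice=dice)

# Toy 2D example standing in for one segmented slice.
pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1
gt   = np.zeros((8, 8), int); gt[3:7, 2:6] = 1
print({k: round(v, 3) for k, v in segmentation_metrics(pred, gt).items()})
```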
Affiliation(s)
- Weimin Chen: School of Information and Electronics, Hunan City University, Yiyang, Hunan 413000, China
- Yong Han: School of Design, Quanzhou University of Information Engineering, Quanzhou, Fujian 362000, China
- Muhammad Awais Ashraf: Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Junhan Liu: School of Design, Quanzhou University of Information Engineering, Quanzhou, Fujian 362000, China
- Mu Zhang: Department of Emergency, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Feng Su: Department of Emergency, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Zhiguo Huang: Department of Emergency, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
- Kelvin K.L. Wong: School of Information and Electronics, Hunan City University, Yiyang, Hunan 413000, China
4. Li C, Zhang G, Zhao B, Xie D, Du H, Duan X, Hu Y, Zhang L. Advances of surgical robotics: image-guided classification and application. Natl Sci Rev 2024;11:nwae186. PMID: 39144738; PMCID: PMC11321255; DOI: 10.1093/nsr/nwae186.
Abstract
Surgical robotics application in the field of minimally invasive surgery has developed rapidly and has been attracting increasingly more research attention in recent years. A common consensus has been reached that surgical procedures are to become less traumatic and with the implementation of more intelligence and higher autonomy, which is a serious challenge faced by the environmental sensing capabilities of robotic systems. One of the main sources of environmental information for robots are images, which are the basis of robot vision. In this review article, we divide clinical image into direct and indirect based on the object of information acquisition, and into continuous, intermittent continuous, and discontinuous according to the target-tracking frequency. The characteristics and applications of the existing surgical robots in each category are introduced based on these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on the general rules on the application of image technologies for medical purposes. Our analysis gives insight and provides guidance conducive to the development of more advanced surgical robotics systems in the future.
Affiliation(s)
- Changsheng Li: School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Gongzi Zhang: Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Baoliang Zhao: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dongsheng Xie: School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hailong Du: Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Xingguang Duan: School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Ying Hu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lihai Zhang: Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
5. Chen Y, Gao Y, Fu X, Chen Y, Wu J, Guo C, Li X. Automatic 3D reconstruction of vertebrae from orthogonal bi-planar radiographs. Sci Rep 2024;14:16165. PMID: 39003269; PMCID: PMC11246511; DOI: 10.1038/s41598-024-65795-7.
Abstract
When conducting spine-related diagnosis and surgery, the three-dimensional (3D) upright posture of the spine under natural weight bearing is of significant clinical value for physicians to analyze the force on the spine. However, existing medical imaging technologies cannot meet the current requirements of medical service. On the one hand, the mainstream 3D volumetric imaging modalities (e.g. CT and MRI) require patients to lie down during the imaging process. On the other hand, the imaging modalities conducted in an upright posture (e.g. radiography) can only produce 2D projections, which lose valid information about spinal anatomy and curvature. Developments in deep learning-based 3D reconstruction methods bring the potential to overcome these limitations. In this paper, we therefore propose a novel deep learning framework, ReVerteR, which realizes automatic 3D Reconstruction of Vertebrae from orthogonal bi-planar Radiographs. With the utilization of a self-attention mechanism and a specially designed loss function combining Dice, Hausdorff, Focal, and MSE terms, ReVerteR can alleviate the sample-imbalance problem during the reconstruction process and realize the fusion of the centroid annotation and the focused vertebra. Furthermore, aiming at automatic and customized 3D spinal reconstruction in real-world scenarios, we extend ReVerteR to a clinical deployment-oriented framework and develop an interactive interface with all functions of the framework integrated, so as to enhance human-computer interaction during clinical decision-making. Extensive experiments and visualizations conducted on our constructed datasets based on two benchmark datasets of spinal CT, VerSe 2019 and VerSe 2020, demonstrate the effectiveness of the proposed ReVerteR. With the 3D upright posture of the spine under natural weight bearing effectively reconstructed, the proposed method is expected to better support doctors in making clinical decisions during spine-related diagnosis and surgery.
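The abstract names a composite loss of Dice, Hausdorff, Focal, and MSE terms but gives neither weights nor formulations. Below is a hedged PyTorch sketch of a simpler composite built from soft Dice, focal, and MSE terms only (the Hausdorff term is omitted because it requires a distance-transform implementation); the weights and toy volumes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - soft Dice over a batch of binary voxel grids."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def focal_loss(logits, target, gamma=2.0):
    """Binary focal loss: down-weights easy voxels to counter class imbalance."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)          # probability assigned to the true class
    return ((1 - p_t) ** gamma * bce).mean()

def composite_loss(logits, target, w_dice=1.0, w_focal=1.0, w_mse=0.5):
    """Illustrative weighted sum; the Hausdorff term of the paper is not included."""
    mse = F.mse_loss(torch.sigmoid(logits), target)
    return (w_dice * soft_dice_loss(logits, target)
            + w_focal * focal_loss(logits, target)
            + w_mse * mse)

# Toy volumes: batch of 2, 16^3 voxels.
logits = torch.randn(2, 16, 16, 16)
target = (torch.rand(2, 16, 16, 16) > 0.7).float()
print(composite_loss(logits, target).item())
```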
Affiliation(s)
- Yuepeng Chen: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China; Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, China
- Yue Gao: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Xiangling Fu: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Yingyin Chen: Guangdong Provincial Key Laboratory of Tumor Interventional Diagnosis and Treatment, Zhuhai People's Hospital, Zhuhai, 519000, China
- Ji Wu: Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, China; Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China; College of AI, Tsinghua University, Beijing, 100084, China
- Chenyi Guo: Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, China; Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Xiaodong Li: Department of Spine and Osteology, Zhuhai People's Hospital, Zhuhai, 519000, China
6. Chen W, Khodaei M, Reformat M, Lou E. Validity of a fast automated 3D spine reconstruction measurements for biplanar radiographs: SOSORT 2024 award winner. Eur Spine J 2024. PMID: 38926172; DOI: 10.1007/s00586-024-08375-7.
Abstract
PURPOSE To validate a fast 3D biplanar spinal radiograph reconstruction method that automatically extracts curvature parameters using artificial intelligence (AI). METHODS Three hundred eighty paired, posteroanterior and lateral, radiographs from the EOS X-ray system of children with adolescent idiopathic scoliosis were randomly selected from the database. For the AI model development, 304 paired images were used for training; 76 pairs were employed for testing. Validation was performed by comparing curvature parameters, including Cobb angles (CA), apical axial vertebral rotation (AVR), kyphotic angle (T1-T12 KA), and lordotic angle (L1-L5 LA), to manual measurements from a rater with 8 years of scoliosis experience. The mean absolute differences ± standard deviation (MAD ± SD), the percentage of measurements within the clinically acceptable errors, the standard error of measurement (SEM), and the inter-method intraclass correlation coefficient ICC[2,1] were calculated. The average reconstruction speed of the 76 test images was recorded. RESULTS Among the 76 test images, 134 and 128 CA were exported automatically and measured manually, respectively. The MAD ± SD for CA, AVR at the apex, KA, and LA were 3.3° ± 3.5°, 1.5° ± 1.5°, 3.3° ± 2.6°, and 3.5° ± 2.5°, respectively, and 98% of these measurements were within the clinically acceptable errors. The SEMs and ICC[2,1] for the compared parameters were all less than 0.7° and greater than 0.94, respectively. The average time to display the 3D spine and report the measurements was 5.2 ± 1.3 s. CONCLUSION The developed AI algorithm could reconstruct a 3D scoliotic spine within 6 s, and the curvature parameters were automatically, accurately, and reliably extracted from the reconstructed images.
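The agreement statistics reported here (MAD ± SD, SEM, ICC[2,1]) can be computed directly from a subjects-by-methods matrix of angle measurements. The sketch below implements the Shrout-Fleiss two-way random, absolute-agreement, single-measurement ICC with NumPy, plus one common SEM form; the example angle pairs are invented for illustration and the SEM definition used in the paper may differ.

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    X is an (n subjects, k raters/methods) array."""
    X = np.asarray(X, float)
    n, k = X.shape
    grand = X.mean()
    ms_r = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_c = n * np.sum((X.mean(axis=0) - grand) ** 2) / (k - 1)   # between raters/methods
    ss_e = np.sum((X - grand) ** 2) - (n - 1) * ms_r - (k - 1) * ms_c
    ms_e = ss_e / ((n - 1) * (k - 1))                             # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def agreement_stats(auto, manual):
    d = np.abs(np.asarray(auto) - np.asarray(manual))
    mad, sd = d.mean(), d.std(ddof=1)
    # One common SEM form: SD of the paired differences divided by sqrt(2).
    sem = np.std(np.asarray(auto) - np.asarray(manual), ddof=1) / np.sqrt(2)
    return mad, sd, sem

# Invented Cobb-angle pairs (automatic vs. manual), degrees.
auto   = np.array([24.5, 31.0, 18.2, 42.7, 27.9, 35.4])
manual = np.array([26.0, 29.8, 19.5, 44.0, 27.0, 33.9])
mad, sd, sem = agreement_stats(auto, manual)
icc = icc_2_1(np.c_[auto, manual])
print(f"MAD ± SD = {mad:.1f} ± {sd:.1f} deg, SEM ~ {sem:.1f} deg, ICC(2,1) = {icc:.3f}")
```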
Affiliation(s)
- Weiying Chen: Department of Electrical and Computer Engineering, University of Alberta, Donadeo ICE 11-263, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
- Mahdieh Khodaei: Department of Electrical and Computer Engineering, University of Alberta, Donadeo ICE 11-263, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
- Marek Reformat: Department of Electrical and Computer Engineering, University of Alberta, Donadeo ICE 11-263, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
- Edmond Lou: Department of Electrical and Computer Engineering, University of Alberta, Donadeo ICE 11-263, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
7. Zhu D, Wang D, Chen Y, Xu Z, He B. Research on Three-Dimensional Reconstruction of Ribs Based on Point Cloud Adaptive Smoothing Denoising. Sensors (Basel) 2024;24:4076. PMID: 39000855; PMCID: PMC11244516; DOI: 10.3390/s24134076.
Abstract
The traditional methods for 3D reconstruction mainly involve using image processing techniques or deep learning segmentation models for rib extraction. After post-processing, voxel-based rib reconstruction is achieved. However, these methods suffer from limited reconstruction accuracy and low computational efficiency. To overcome these limitations, this paper proposes a 3D rib reconstruction method based on point cloud adaptive smoothing and denoising. We converted voxel data from CT images to multi-attribute point cloud data. Then, we applied point cloud adaptive smoothing and denoising methods to eliminate noise and non-rib points in the point cloud. Additionally, efficient 3D reconstruction and post-processing techniques were employed to achieve high-accuracy and comprehensive 3D rib reconstruction results. Experimental calculations demonstrated that compared to voxel-based 3D rib reconstruction methods, the 3D rib models generated by the proposed method achieved a 40% improvement in reconstruction accuracy and were twice as efficient as the former.
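The abstract describes adaptive smoothing and denoising of a rib point cloud without giving the exact algorithm. As a generic stand-in for the denoising step, the sketch below shows classic statistical outlier removal (drop points whose mean distance to their k nearest neighbours is far above the cloud-wide average) using SciPy's KD-tree; the neighbourhood size, threshold, and toy point cloud are assumptions, not the authors' method.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Keep points whose mean k-nearest-neighbour distance is below
    mean + std_ratio * std of that quantity over the whole cloud."""
    tree = cKDTree(points)
    # Query k+1 neighbours because each point's nearest neighbour is itself (distance 0).
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    keep = mean_knn < thresh
    return points[keep], keep

# Toy cloud: a dense blob of "rib" points plus a few scattered noise points.
rng = np.random.default_rng(0)
rib = rng.normal(scale=1.0, size=(500, 3))
noise = rng.uniform(-20, 20, size=(10, 3))
cloud = np.vstack([rib, noise])
clean, keep = remove_statistical_outliers(cloud)
print(f"kept {keep.sum()} of {len(cloud)} points")
```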
Affiliation(s)
- Darong Zhu: School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China; Affiliated Hangzhou First People's Hospital, School of Medicine, Westlake University, Hangzhou 310024, China
- Diao Wang: School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
- Yuanjiao Chen: School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
- Zhe Xu: School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
- Bishi He: School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China
8. Zanier O, Theiler S, Mutten RD, Ryu SJ, Regli L, Serra C, Staartjes VE. TomoRay: Generating Synthetic Computed Tomography of the Spine From Biplanar Radiographs. Neurospine 2024;21:68-75. PMID: 38317547; PMCID: PMC10992629; DOI: 10.14245/ns.2347158.579.
Abstract
OBJECTIVE Computed tomography (CT) imaging is a cornerstone in the assessment of patients with spinal trauma and in the planning of spinal interventions. However, CT studies are associated with logistical problems, acquisition costs, and radiation exposure. In this proof-of-concept study, the feasibility of generating synthetic spinal CT images using biplanar radiographs was explored. This could expand the potential applications of x-ray machines pre-, post-, and even intraoperatively. METHODS A cohort of 209 patients who underwent spinal CT imaging from the VerSe2020 dataset was used to train the algorithm. The model was subsequently evaluated using an internal and external validation set containing 55 from the VerSe2020 dataset and a subset of 56 images from the CTSpine1K dataset, respectively. Digitally reconstructed radiographs served as input for training and evaluation of the 2-dimensional (2D)-to-3-dimentional (3D) generative adversarial model. Model performance was assessed using peak signal to noise ratio (PSNR), structural similarity index (SSIM), and cosine similarity (CS). RESULTS At external validation, the developed model achieved a PSNR of 21.139 ± 1.018 dB (mean ± standard deviation). The SSIM and CS amounted to 0.947 ± 0.010 and 0.671 ± 0.691, respectively. CONCLUSION Generating an artificial 3D output from 2D imaging is challenging, especially for spinal imaging, where x-rays are known to deliver insufficient information frequently. Although the synthetic CT scans derived from our model do not perfectly match their ground truth CT, our proof-of-concept study warrants further exploration of the potential of this technology.
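The three image-quality metrics reported (PSNR, SSIM, cosine similarity) are standard and easy to reproduce on any pair of volumes: PSNR from the mean squared error, SSIM via scikit-image, and cosine similarity between flattened intensity vectors. The sketch below uses random toy arrays in place of the reference and synthetic CT volumes; it illustrates the metrics only, not the GAN itself.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def cosine_similarity(a, b):
    """Cosine similarity between flattened intensity vectors."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
ct_true = rng.random((64, 64, 64)).astype(np.float32)
ct_fake = np.clip(ct_true + 0.05 * rng.standard_normal(ct_true.shape).astype(np.float32), 0, 1)
print("PSNR (dB):", round(psnr(ct_true, ct_fake, data_range=1.0), 2))
print("SSIM:", round(structural_similarity(ct_true, ct_fake, data_range=1.0), 3))
print("cosine similarity:", round(cosine_similarity(ct_true, ct_fake), 3))
```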
Affiliation(s)
- Olivier Zanier: Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Sven Theiler: Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Raffaele Da Mutten: Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Seung-Jun Ryu: Department of Neurosurgery, Daejeon Eulji University Hospital, Eulji University Medical School, Daejeon, Korea
- Luca Regli: Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Carlo Serra: Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Victor E. Staartjes: Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
9. Li B, Zhang J, Wang Q, Li H, Wang Q. Three-dimensional spine reconstruction from biplane radiographs using convolutional neural networks. Med Eng Phys 2024;123:104088. PMID: 38365341; DOI: 10.1016/j.medengphy.2023.104088.
Abstract
PURPOSE The purpose of this study was to develop and evaluate a deep learning network for three-dimensional reconstruction of the spine from biplanar radiographs. METHODS The proposed approach focused on extracting similar features and multiscale features of bone tissue in biplanar radiographs. Bone tissue features were reconstructed for feature representation across dimensions to generate three-dimensional volumes. The number of feature mappings was gradually reduced in the reconstruction to transform the high-dimensional features into the three-dimensional image domain. We produced and made eight public datasets to train and test the proposed network. Two evaluation metrics were proposed and combined with four classical evaluation metrics to measure the performance of the method. RESULTS In comparative experiments, the reconstruction results of this method achieved a Hausdorff distance of 1.85 mm, a surface overlap of 0.2 mm, a volume overlap of 0.9664, and an offset distance of only 0.21 mm from the vertebral body centroid. The results of this study indicate that the proposed method is reliable.
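Two of the surface-based evaluation metrics used here, the Hausdorff distance and a mean surface/centroid offset, can be sketched with SciPy's nearest-neighbour utilities. The point sets below are synthetic stand-ins; a real evaluation would use the vertices of the reconstructed and reference vertebral surfaces, and the paper's exact metric definitions may differ.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def mean_surface_distance(a, b):
    """Average of the two mean nearest-neighbour distances (a->b and b->a)."""
    d_ab = cKDTree(b).query(a)[0].mean()
    d_ba = cKDTree(a).query(b)[0].mean()
    return 0.5 * (d_ab + d_ba)

rng = np.random.default_rng(2)
recon = rng.normal(size=(2000, 3))
truth = recon + rng.normal(scale=0.05, size=recon.shape)   # slightly perturbed copy
print("Hausdorff (toy units):", round(symmetric_hausdorff(recon, truth), 3))
print("Mean surface distance:", round(mean_surface_distance(recon, truth), 3))
print("Centroid offset:", round(np.linalg.norm(recon.mean(0) - truth.mean(0)), 4))
```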
Affiliation(s)
- Bo Li: Department of Electronic Engineering, Yunnan University, Kunming, China
- Junhua Zhang: Department of Electronic Engineering, Yunnan University, Kunming, China
- Qian Wang: Department of Electronic Engineering, Yunnan University, Kunming, China
- Hongjian Li: The First People's Hospital of Yunnan Province, China
- Qiyang Wang: The First People's Hospital of Yunnan Province, China
10. Nguyen TP, Kim JH, Kim SH, Yoon J, Choi SH. Machine Learning-Based Measurement of Regional and Global Spinal Parameters Using the Concept of Incidence Angle of Inflection Points. Bioengineering (Basel) 2023;10:1236. PMID: 37892966; PMCID: PMC10604057; DOI: 10.3390/bioengineering10101236.
Abstract
This study delves into the application of convolutional neural networks (CNNs) in evaluating spinal sagittal alignment, introducing the innovative concept of incidence angles of inflection points (IAIPs) as intuitive parameters to capture the interplay between pelvic and spinal alignment. Pioneering the fusion of IAIPs with machine learning for sagittal alignment analysis, this research scrutinized whole-spine lateral radiographs from hundreds of patients who visited a single institution, utilizing high-quality images for parameter assessments. Noteworthy findings revealed robust success rates for certain parameters, including pelvic and C2 incidence angles, but comparatively lower rates for sacral slope and L1 incidence. The proposed CNN-based machine learning method demonstrated remarkable efficiency, achieving an impressive 80 percent detection rate for various spinal angles, such as lumbar lordosis and thoracic kyphosis, with a precise error threshold of 3.5°. Further bolstering the study's credibility, measurements derived from the novel formula closely aligned with those directly extracted from the CNN model. In conclusion, this research underscores the utility of the CNN-based deep learning algorithm in delivering precise measurements of spinal sagittal parameters, and highlights the potential for integrating machine learning with the IAIP concept for comprehensive data accumulation in the domain of sagittal spinal alignment analysis, thus advancing our understanding of spinal health.
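The IAIP formula itself is not reproduced in this abstract, but every sagittal parameter discussed (lumbar lordosis, thoracic kyphosis, incidence-type angles) ultimately reduces to an angle between lines defined by detected landmarks. The sketch below shows only that generic building block on hypothetical 2D coordinates; it is not the authors' parameter definition.

```python
import numpy as np

def angle_between_lines(p1, p2, q1, q2):
    """Unsigned angle (degrees) between line p1->p2 and line q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical endplate landmarks (x, y) on a lateral radiograph.
upper_endplate = ((120.0, 210.0), (180.0, 198.0))
lower_endplate = ((118.0, 330.0), (182.0, 352.0))
print("Cobb-style angle (deg):", round(angle_between_lines(*upper_endplate, *lower_endplate), 1))
```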
Affiliation(s)
- Thong Phi Nguyen: Department of Mechanical Engineering, BK21 FOUR ERICA-ACE Center, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea; Department of Mechanical Engineering, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- Ji-Hwan Kim: Department of Orthopedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
- Seong-Ha Kim: Department of Orthopedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
- Jonghun Yoon: Department of Mechanical Engineering, BK21 FOUR ERICA-ACE Center, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea; Department of Mechanical Engineering, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea; AIDICOME Inc., 221, 5th Engineering Building, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- Sung-Hoon Choi: Department of Orthopedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
11. Loisel F, Durand S, Goubier JN, Bonnet X, Rouch P, Skalli W. Three-dimensional reconstruction of the hand from biplanar X-rays: Assessment of accuracy and reliability. Orthop Traumatol Surg Res 2023;109:103403. PMID: 36108817; DOI: 10.1016/j.otsr.2022.103403.
Abstract
BACKGROUND Functional disorders of the hand are generally investigated first using conventional radiographic imaging. However, X-rays, being two-dimensional (2D), provide limited information, which may be further reduced by overlapping bones and projection bias. This work presents a three-dimensional (3D) hand reconstruction method from biplanar X-rays. METHOD This approach consists of deforming a generic hand model onto biplanar X-rays through manual and automatic processes. With manual CT segmentation as the reference examination, the precision of the method was evaluated by comparing the reconstructions from biplanar X-rays with the corresponding reconstructions from the CT scan (0.3 mm section thickness). To assess the reproducibility of the method, 6 healthy hands (6 subjects, 3 left, 3 men) were considered. Two operators repeated each reconstruction from biplanar X-rays three times to study inter- and intra-operator variability. Three anatomical parameters that could be calculated automatically from the reconstructions were considered from the bone surfaces: the length of the scaphoid, the depth of the distal end of the radius, and the height of the trapezium. RESULTS Twice the root mean square error (2RMS) of the point/area difference between biplanar X-ray and computed tomography reconstructions ranged from 0.46 mm for the distal phalanges to 1.55 mm for the distal carpal bones. The inter- and intra-observer variability showed precision with a 95% confidence interval of less than 1.32 mm for the anatomical parameters and 2.12 mm for the bone centroids. DISCUSSION The current method makes it possible to obtain an accurate 3D reconstruction of the hand and wrist compared with traditional CT segmentation. By improving the automation of the method, objective information about the position of the bones in space could be obtained quickly. The value of this method lies in the early diagnosis of certain ligament pathologies (carpal instability), and it also has implications for surgical planning and personalized finite element modeling. LEVEL OF PROOF Basic sciences.
Affiliation(s)
- François Loisel: Orthopaedics, traumatology, plastic & reconstructive surgery unit, Hand surgery unit, University Hospital J. Minjoz, Besançon, France; Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Stan Durand: Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Jean-Noël Goubier: Institute of Brachial Plexus and Nerve Surgery, 92, boulevard de Courcelles, 75017 Paris, France
- Xavier Bonnet: Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Philippe Rouch: Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
- Wafa Skalli: Institute of Human Biomechanics G. Charpak, National School of Arts and Crafts, Paris, France
12. Huang K, Zhang J. Three-dimensional lumbar spine generation using variational autoencoder. Med Eng Phys 2023;120:104046. PMID: 37838400; DOI: 10.1016/j.medengphy.2023.104046.
Abstract
The disease analysis of the lumbar spine often requires a large number of three-dimensional (3D) models. Currently, there is a lack of 3D models of the lumbar spine for research, especially for diseases such as scoliosis, where it is difficult to collect sufficient data in a short period of time. To solve this problem, we develop an end-to-end network based on a 3D variational autoencoder for randomly generating 3D lumbar spine models. In this network, a dual-path encoder structure is used to fit two individual variables, i.e., the mean and the variance. Spatial coordinate attention modules are added to the encoder to improve the learning ability of the network with respect to the 3D spatial structure of the lumbar spine. To enhance the power of the network to reconstruct the lumbar spine, a regularization loss is added to constrain the distribution loss. Additionally, Gaussian noise layers are added to the decoder to improve the authenticity and diversity of the generated models. The experiments were conducted on the data of the entire lumbar spine and of individual lumbar vertebrae, respectively. The results showed that the voxel intersection over union was 0.588 and 0.684, the voxel Dice coefficient was 0.739 and 0.811, the average surface distance was 0.807 and 1.189, and the Hausdorff distance was 2.615 and 3.710, for the entire lumbar spine and individual lumbar vertebrae, respectively. The developed approach is comparable to the most commonly used model generation method, the statistical shape model (SSM), in both visual and objective indicators, while not requiring the landmarks that are needed in the SSM method. Therefore, this fully automatic method can be easily used for population-based modeling of the lumbar spine and has the potential to be a powerful clinical tool.
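The generative core described here is the standard VAE machinery: an encoder with separate mean and log-variance outputs, a reparameterized latent sample, and a KL regularizer. A minimal PyTorch sketch of those three pieces follows; the tiny fully connected encoder/decoder and the flattened toy occupancy grids are placeholders and bear no relation to the paper's 3D convolutional architecture, attention modules, or noise layers.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE skeleton: dual-path (mu, logvar) encoder + reparameterization."""
    def __init__(self, in_dim=512, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    """Reconstruction term plus KL divergence of the latent distribution."""
    recon_term = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + beta * kl

x = (torch.rand(4, 512) > 0.8).float()      # toy flattened occupancy grids
model = TinyVAE()
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
```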
Affiliation(s)
- Kun Huang: School of Information Science and Engineering, Yunnan University, Kunming, China
- Junhua Zhang: School of Information Science and Engineering, Yunnan University, Kunming, China
13. Lee TY, Yang D, Lai KK, Castelein RM, Schlosser TPC, Chu W, Lam T, Zheng Y. Three-dimensional ultrasonography could be a potential non-ionizing tool to evaluate vertebral rotation of subjects with adolescent idiopathic scoliosis. JOR Spine 2023;6:e1259. PMID: 37780820; PMCID: PMC10540829; DOI: 10.1002/jsp2.1259.
Abstract
Background Three-dimensional (3D) ultrasonography is nonionizing and has been demonstrated to be a reliable tool for scoliosis assessment, including coronal and sagittal curvatures. It shows great potential for axial vertebral rotation (AVR) evaluation, yet its validity and reliability need to be further demonstrated. Materials and Methods Twenty patients with adolescent idiopathic scoliosis (AIS) (coronal Cobb: 26.6 ± 9.1°) received 3D ultrasound scans twice; 10 were scanned by the same operator, and the other 10 by different operators. EOS biplanar X-rays and 3D ultrasound scans were conducted on another 29 patients on the same day. Two experienced 3D ultrasonographic researchers, with different levels of experience in AVR measurement, evaluated the 3D ultrasonographic AVR of the 29 patients (55 curves; coronal Cobb angle: 26.9 ± 11.3°). The gold-standard AVR was determined from the 3D reconstruction of coronal and sagittal EOS radiographs. Intra-class correlation coefficients (ICCs), mean absolute difference (MAD), standard error of measurement (SEM), and Bland-Altman bias were reported to evaluate the intra-operator and inter-operator/rater reliabilities of 3D ultrasonography. The 3D ultrasonographic AVR measurements were further validated against those of EOS in an inter-method comparison. Results ICCs for intra-operator and inter-operator/rater reliability assessment were all greater than 0.95. MAD, SEM, and bias for the 3D ultrasonographic AVRs were no more than 2.2°, 2.0°, and 0.5°, respectively. AVRs between both modalities were strongly correlated (R² = 0.901) and not significantly different (p = 0.205). The Bland-Altman plot also shows that the bias was less than 1°, with no proportional bias between the difference and mean of the expected and radiographic Cobb angles. Conclusion This study demonstrates that 3D ultrasonography is valid and reliable for evaluating AVR in AIS patients. 3D ultrasonography can be a potential tool for screening and following up subjects with AIS and for evaluating the effectiveness of non-surgical treatments.
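Alongside ICC and MAD, the validation relies on Bland-Altman analysis, i.e. the mean of the pairwise differences (bias) and its 95% limits of agreement. The NumPy sketch below computes both from invented ultrasound versus radiographic AVR values; the numbers are illustrative only.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented axial vertebral rotation values (degrees): ultrasound vs. EOS-based reference.
us  = [4.2, 7.8, 3.1, 10.5, 6.0, 8.9, 2.4, 5.7]
eos = [4.8, 7.1, 3.5, 10.0, 6.6, 8.2, 2.9, 6.3]
bias, loa = bland_altman(us, eos)
print(f"bias = {bias:.2f} deg, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) deg")
```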
Affiliation(s)
- Tin Yan Lee: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong; Research Institute for Smart Ageing, The Hong Kong Polytechnic University, Hong Kong
- De Yang: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong
- Kelly Ka-Lee Lai: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong
- Rene M. Castelein: Department of Orthopaedic Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tom P. C. Schlosser: Department of Orthopaedic Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Winnie Chu: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR, China
- Tsz-Ping Lam: SH Ho Scoliosis Research Lab, Joint Scoliosis Research Center of the Chinese University of Hong Kong and Nanjing University, Department of Orthopaedics & Traumatology, The Chinese University of Hong Kong, Hong Kong
- Yong-Ping Zheng: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong; Research Institute for Smart Ageing, The Hong Kong Polytechnic University, Hong Kong
14. Sarmah M, Neelima A, Singh HR. Survey of methods and principles in three-dimensional reconstruction from two-dimensional medical images. Vis Comput Ind Biomed Art 2023;6:15. PMID: 37495817; PMCID: PMC10371974; DOI: 10.1186/s42492-023-00142-7.
Abstract
Three-dimensional (3D) reconstruction of human organs has gained attention in recent years due to advances in the Internet and graphics processing units. In the coming years, most patient care will shift toward this new paradigm. However, development of fast and accurate 3D models from medical images or a set of medical scans remains a daunting task due to the number of pre-processing steps involved, most of which are dependent on human expertise. In this review, a survey of pre-processing steps was conducted, and reconstruction techniques for several organs in medical diagnosis were studied. Various methods and principles related to 3D reconstruction were highlighted. The usefulness of 3D reconstruction of organs in medical diagnosis was also highlighted.
Affiliation(s)
- Mriganka Sarmah: Department of Computer Science and Engineering, National Institute of Technology, Nagaland, 797103, India
- Arambam Neelima: Department of Computer Science and Engineering, National Institute of Technology, Nagaland, 797103, India
- Heisnam Rohen Singh: Department of Information Technology, Nagaland University, Nagaland, 797112, India
15. Sun W, Zhao Y, Liu J, Zheng G. LatentPCN: latent space-constrained point cloud network for reconstruction of 3D patient-specific bone surface models from calibrated biplanar X-ray images. Int J Comput Assist Radiol Surg 2023. PMID: 37027083; DOI: 10.1007/s11548-023-02877-3.
Abstract
PURPOSE Accurate three-dimensional (3D) models play crucial roles in computer-assisted planning and interventions. MR or CT images are frequently used to derive 3D models but have the disadvantages of being expensive or involving ionizing radiation (e.g., CT acquisition). An alternative method based on calibrated 2D biplanar X-ray images is highly desired. METHODS A point cloud network, referred to as LatentPCN, is developed for reconstruction of 3D surface models from calibrated biplanar X-ray images. LatentPCN consists of three components: an encoder, a predictor, and a decoder. During training, a latent space is learned to represent shape features. After training, LatentPCN maps sparse silhouettes generated from 2D images to a latent representation, which is taken as the input to the decoder to derive a 3D bone surface model. Additionally, LatentPCN allows for estimation of a patient-specific reconstruction uncertainty. RESULTS We designed and conducted comprehensive experiments on datasets of 25 simulated cases and 10 cadaveric cases to evaluate the performance of LatentPCN. On these two datasets, the mean reconstruction errors achieved by LatentPCN were 0.83 mm and 0.92 mm, respectively. A correlation between large reconstruction errors and high uncertainty in the reconstruction results was observed. CONCLUSION LatentPCN can reconstruct patient-specific 3D surface models from calibrated 2D biplanar X-ray images with high accuracy and uncertainty estimation. The sub-millimeter reconstruction accuracy on cadaveric cases demonstrates its potential for surgical navigation applications.
Affiliation(s)
- Wenyuan Sun: Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Yuyun Zhao: Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Jihao Liu: Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Guoyan Zheng: Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
16. Aubert B, Cresson T, de Guise JA, Vazquez C. X-Ray to DRR Images Translation for Efficient Multiple Objects Similarity Measures in Deformable Model 3D/2D Registration. IEEE Trans Med Imaging 2023;42:897-909. PMID: 36318556; DOI: 10.1109/tmi.2022.3218568.
Abstract
The robustness and accuracy of the intensity-based 3D/2D registration of a 3D model on planar X-ray image(s) are related to the quality of the image correspondences between the digitally reconstructed radiographs (DRR) generated from the 3D models (varying image) and the X-ray images (fixed target). While much effort may be devoted to generating realistic DRR that are similar to real X-rays (using complex X-ray simulation, adding density information to the 3D models, etc.), significant differences still remain between DRR and real X-ray images. Differences such as the presence of adjacent or superimposed soft tissue and bony or foreign structures lead to image matching difficulties and decrease the 3D/2D registration performance. In the proposed method, the X-ray images were converted into DRR images using a GAN-based cross-modality image-to-image translation. With this added prior step of X-ray-to-DRR translation, standard similarity measures become efficient even when using simple and fast DRR projection. For both images to match, they must belong to the same image domain and essentially contain the same kind of information. The X-ray-to-DRR translation also addresses the well-known issue of registering an object in a scene composed of multiple objects by separating the superimposed and/or adjacent objects to avoid mismatching across similar structures. We applied the proposed method to the 3D/2D fine registration of vertebra deformable models to biplanar radiographs of the spine. We showed that the X-ray-to-DRR translation enhances the registration results by increasing the capture range and decreasing the dependence on the choice of similarity measure, since the multi-modal registration becomes mono-modal.
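Once the X-ray has been translated into the DRR domain, mono-modal similarity measures become usable; the simplest of these is zero-mean normalized cross-correlation. The NumPy sketch below implements NCC on toy image patches; the translation network itself is outside the scope of this snippet, and the "translated" image is simulated by adding noise to the DRR.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two same-sized images."""
    a = a.astype(float).ravel(); a -= a.mean()
    b = b.astype(float).ravel(); b -= b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

rng = np.random.default_rng(3)
drr = rng.random((128, 128))                              # DRR rendered from the current 3D pose
xray_as_drr = 0.9 * drr + 0.1 * rng.random(drr.shape)     # stand-in for the translated X-ray
print("NCC:", round(ncc(drr, xray_as_drr), 3))            # close to 1 when the pose is close
```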
17. New method to apply the lumbar lordosis of standing radiographs to supine CT-based virtual 3D lumbar spine models. Sci Rep 2022;12:20382. PMID: 36437349; PMCID: PMC9701766; DOI: 10.1038/s41598-022-24570-2.
Abstract
Standing radiographs play an important role in the characterization of spinal sagittal alignment, as they depict the spine under physiologic loading conditions. However, there is no commonly available method to apply the lumbar lordosis of standing radiographs to supine CT-based virtual 3D models of the lumbar spine. We aimed to develop a method for the sagittal rigid-body registration of vertebrae to standing radiographs, using the exact geometry reconstructed from CT-data. In a cohort of 50 patients with monosegmental spinal degeneration, segmentation and registration of the lumbar vertebrae and sacrum were performed by two independent investigators. Intersegmental angles and lumbar lordosis were measured both in CT scans and radiographs. Vertebrae were registered using the X-ray module of Materialise Mimics software. Postregistrational midsagittal sections were constructed of the sagittal midplane sections of the registered 3D lumbar spine geometries. Mean Hausdorff distance was measured between corresponding registered vertebral geometries. The registration process minimized the difference between the X-rays' and postregistrational midsagittal sections' lordoses. Intra- and inter-rater reliability was excellent based on angle and mean Hausdorff distance measurements. We propose an accessible, accurate, and reproducible method for creating patient-specific 3D geometries of the lumbar spine that accurately represent spinal sagittal alignment in the standing position.
18. Jecklin S, Jancik C, Farshad M, Fürnstahl P, Esfandiari H. X23D-Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data. J Imaging 2022;8:271. PMID: 36286365; PMCID: PMC9604813; DOI: 10.3390/jimaging8100271.
Abstract
Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, based on planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters in the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving a higher accuracy compared to the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images that were digitally reconstructed from the public CTSpine1K dataset. As evaluated by unseen data, we achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a counterpart method in the state of the art by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making solely based on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
Affiliation(s)
- Sascha Jecklin: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Carla Jancik: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Mazda Farshad: Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
19. Offiah AC. Current and emerging artificial intelligence applications for pediatric musculoskeletal radiology. Pediatr Radiol 2022;52:2149-2158. PMID: 34272573; PMCID: PMC9537230; DOI: 10.1007/s00247-021-05130-8.
Abstract
Artificial intelligence (AI) is playing an ever-increasing role in radiology (more so in the adult world than in pediatrics), to the extent that there are unfounded fears it will completely take over the role of the radiologist. In relation to musculoskeletal applications of AI in pediatric radiology, we are far from the time when AI will replace radiologists; even for the commonest application (bone age assessment), AI is more often employed in an AI-assist mode rather than an AI-replace or AI-extend mode. AI for bone age assessment has been in clinical use for more than a decade and is the area in which most research has been conducted. Most other potential indications in children (such as appendicular and vertebral fracture detection) remain largely in the research domain. This article reviews the areas in which AI is most prominent in relation to the pediatric musculoskeletal system, briefly summarizing the current literature and highlighting areas for future research. Pediatric radiologists are encouraged to participate as members of the research teams conducting pediatric radiology artificial intelligence research.
Affiliation(s)
- Amaka C Offiah: Department of Oncology and Metabolism, University of Sheffield, Damer Street Building, Sheffield, S10 2TH, UK; Department of Radiology, Sheffield Children's NHS Foundation Trust, Sheffield, UK
20. Spinopelvic measurements of sagittal balance with deep learning: systematic review and critical evaluation. Eur Spine J 2022;31:2031-2045. PMID: 35278146; DOI: 10.1007/s00586-022-07155-5.
Abstract
PURPOSE To summarize and critically evaluate the existing studies for spinopelvic measurements of sagittal balance that are based on deep learning (DL). METHODS Three databases (PubMed, WoS and Scopus) were queried for records using keywords related to DL and measurement of sagittal balance. After screening the resulting 529 records that were augmented with specific web search, 34 studies published between 2017 and 2022 were included in the final review, and evaluated from the perspective of the observed sagittal spinopelvic parameters, properties of spine image datasets, applied DL methodology and resulting measurement performance. RESULTS Studies reported DL measurement of up to 18 different spinopelvic parameters, but the actual number depended on the image field of view. Image datasets were composed of lateral lumbar spine and whole spine X-rays, biplanar whole spine X-rays and lumbar spine magnetic resonance cross sections, and were increasing in size or enriched by augmentation techniques. Spinopelvic parameter measurement was approached either by landmark detection or structure segmentation, and U-Net was the most frequently applied DL architecture. The latest DL methods achieved excellent performance in terms of mean absolute error against reference manual measurements (~ 2° or ~ 1 mm). CONCLUSION Although the application of relatively complex DL architectures resulted in an improved measurement accuracy of sagittal spinopelvic parameters, future methods should focus on multi-institution and multi-observer analyses as well as uncertainty estimation and error handling implementations for integration into the clinical workflow. Further advances will enhance the predictive analytics of DL methods for spinopelvic parameter measurement. LEVEL OF EVIDENCE I Diagnostic: individual cross-sectional studies with the consistently applied reference standard and blinding.
Collapse
|
21
|
Galbusera F, Bassani T, Panico M, Sconfienza LM, Cina A. A fresh look at spinal alignment and deformities: Automated analysis of a large database of 9832 biplanar radiographs. Front Bioeng Biotechnol 2022; 10:863054. [PMID: 35910028 PMCID: PMC9335010 DOI: 10.3389/fbioe.2022.863054] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 06/27/2022] [Indexed: 11/13/2022] Open
Abstract
We developed and used a deep learning tool to process biplanar radiographs of 9,832 non-surgical patients suffering from spinal deformities, with the aim of reporting the statistical distribution of radiological parameters describing the spinal shape and the correlations and interdependencies between them. An existing tool able to automatically perform a three-dimensional reconstruction of the thoracolumbar spine has been improved and used to analyze a large set of biplanar radiographs of the trunk. For all patients, the following parameters were calculated: spinopelvic parameters; lumbar lordosis; mismatch between pelvic incidence and lumbar lordosis; thoracic kyphosis; maximal coronal Cobb angle; sagittal vertical axis; T1-pelvic angle; maximal vertebral rotation in the transverse plane. The radiological parameters describing the sagittal alignment were found to be highly interrelated with each other, as well as dependent on age, while sex had relatively minor but statistically significant importance. Lumbar lordosis was associated with thoracic kyphosis, pelvic incidence and sagittal vertical axis. The pelvic incidence-lumbar lordosis mismatch was found to be dependent on the pelvic incidence and on age. Scoliosis had a distinct association with the sagittal alignment in adolescent and adult subjects. The deep learning-based tool allowed for the analysis of a large imaging database which would not be reasonably feasible if performed by human operators. The large set of results will be valuable to trigger new research questions in the field of spinal deformities, as well as to challenge the current knowledge.
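To make the kind of population-level analysis described above concrete, the following minimal Python sketch (mock values and hypothetical column names, not data or code from the cited study) derives the pelvic incidence minus lumbar lordosis mismatch and inspects correlations between sagittal parameters:

```python
import pandas as pd

# Illustrative sketch with mock values: per-patient sagittal parameters as they
# might be exported by an automated measurement tool; names are hypothetical.
df = pd.DataFrame({
    "age": [14, 35, 52, 67, 71],
    "PI":  [48.0, 52.5, 55.0, 58.2, 60.1],   # pelvic incidence (deg)
    "LL":  [55.0, 54.0, 48.5, 42.0, 35.5],   # lumbar lordosis (deg)
    "TK":  [38.0, 41.0, 44.5, 47.0, 52.0],   # thoracic kyphosis (deg)
    "SVA": [-5.0, 2.0, 18.0, 42.0, 65.0],    # sagittal vertical axis (mm)
})

# Pelvic incidence minus lumbar lordosis, a commonly reported mismatch measure.
df["PI_minus_LL"] = df["PI"] - df["LL"]

# Pairwise Pearson correlations between parameters, and their dependence on age.
print(df[["PI", "LL", "TK", "SVA", "PI_minus_LL"]].corr().round(2))
print(df[["age", "SVA", "PI_minus_LL"]].corr().round(2))
```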
Collapse
Affiliation(s)
- Fabio Galbusera
- Spine Center, Schulthess Clinic, Zurich, Switzerland
- *Correspondence: Fabio Galbusera,
| | - Tito Bassani
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
| | - Matteo Panico
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Department of Chemistry, Materials and Chemical Engineering “Giulio Natta”, Politecnico di Milano, Milan, Italy
| | - Luca Maria Sconfienza
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Department of Biomedical Sciences for Health, Università Degli Studi di Milano, Milan, Italy
| | - Andrea Cina
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
| |
Collapse
|
22
|
Region-Based Convolutional Neural Network-Based Spine Model Positioning of X-Ray Images. BIOMED RESEARCH INTERNATIONAL 2022; 2022:7512445. [PMID: 35757487 PMCID: PMC9232328 DOI: 10.1155/2022/7512445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 05/11/2022] [Accepted: 05/31/2022] [Indexed: 11/17/2022]
Abstract
Background Idiopathic scoliosis accounts for over 80% of all cases of scoliosis but has an unclear pathogenic mechanism. Many studies have applied conventional image processing methods, but the results often fail to meet expectations. With the evolution of deep learning research, efforts to reconstruct the spine using convolutional neural network (CNN) architectures have shown promise. Purpose To investigate the use of CNNs for spine modeling. Methods The primary technique used in this study is Mask Region-based CNN (Mask R-CNN) image segmentation and object detection applied to positioning a spine model on radiographs. The methods were evaluated against common criteria for vertebral segmentation and object detection: the overall loss, mask loss, classification loss and target box loss functions, as well as average accuracy and average recall. Results Many bony structures, including the lumbar (L1-L5) and thoracic (T1-T12) vertebrae, were directly identified in one step in frontal and lateral radiographs, thereby achieving initial positioning of the statistical spine model for future reconstruction and classification prediction. An average detection box accuracy of 97.4% and an average segmentation accuracy of 96.8% were achieved on frontal images, with good visualization. The results for lateral images were also satisfactory in terms of the evaluation parameters and visualization. Conclusion Mask R-CNN can be used for effective spine model positioning for future reconstruction and classification prediction.
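As an illustration of the detection-plus-segmentation approach named above, the sketch below configures an off-the-shelf torchvision Mask R-CNN with one class per thoracolumbar vertebra and runs it on a dummy radiograph tensor; the number of classes, input size and score threshold are assumptions, not the cited study's settings.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# One class per vertebra (T1-T12, L1-L5) plus background; illustrative only.
NUM_CLASSES = 1 + 17
model = maskrcnn_resnet50_fpn(num_classes=NUM_CLASSES)
model.eval()

# A radiograph as a single-channel image replicated to 3 channels, values in [0, 1].
radiograph = torch.rand(3, 1024, 512)

with torch.no_grad():
    prediction = model([radiograph])[0]

# Each detection carries a label, a bounding box, a confidence score and a soft mask.
for label, box, score in zip(prediction["labels"], prediction["boxes"], prediction["scores"]):
    if score > 0.5:
        print(int(label), [round(float(v), 1) for v in box], round(float(score), 2))
```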
Collapse
|
23
|
Shin H, Choi GS, Shon OJ, Kim GB, Chang MC. Development of convolutional neural network model for diagnosing meniscus tear using magnetic resonance image. BMC Musculoskelet Disord 2022; 23:510. [PMID: 35637451 PMCID: PMC9150332 DOI: 10.1186/s12891-022-05468-6] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Accepted: 05/23/2022] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Deep learning (DL) is an advanced machine learning approach used in diverse areas, such as image analysis, bioinformatics, and natural language processing. A convolutional neural network (CNN) is a representative DL model that is advantageous for image recognition and classification. In this study, we aimed to develop a CNN to detect meniscal tears and classify tear types using coronal and sagittal magnetic resonance (MR) images of each patient. METHODS We retrospectively collected 599 cases (medial meniscus tear = 384, lateral meniscus tear = 167, and medial and lateral meniscus tear = 48) of knee MR images from patients with meniscal tears and 449 cases of knee MR images from patients without meniscal tears. To develop the DL model for evaluating the presence of meniscal tears, all 1048 collected cases of knee MR images were used. To develop the DL model for evaluating the type of meniscal tear, 538 cases with meniscal tears (horizontal tear = 268, complex tear = 147, radial tear = 48, and longitudinal tear = 75) and 449 cases without meniscal tears were used. A CNN algorithm was used for both models. To measure model performance, 70% of the included data were randomly assigned to the training set and the remaining 30% to the test set. RESULTS The areas under the curve (AUCs) of our model were 0.889, 0.817, and 0.924 for medial meniscal tears, lateral meniscal tears, and medial and lateral meniscal tears, respectively. The AUCs for horizontal, complex, radial, and longitudinal tears were 0.761, 0.850, 0.601, and 0.858, respectively. CONCLUSION Our study showed that the CNN model has the potential to be used in diagnosing the presence of meniscal tears and differentiating the types of meniscal tears.
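The AUC evaluation described above can be reproduced in outline with scikit-learn; the labels and scores below are randomly generated stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Mock ground-truth labels (tear present / absent) and mock CNN output scores.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1048)
scores = np.clip(y * 0.6 + rng.normal(0.3, 0.2, 1048), 0, 1)

# 70/30 split of cases into training and test sets, as described in the abstract.
y_train, y_test, s_train, s_test = train_test_split(y, scores, test_size=0.3, random_state=42)

print("test-set AUC:", round(roc_auc_score(y_test, s_test), 3))
```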
Collapse
Affiliation(s)
- Hyunkwang Shin
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si, Republic of Korea
| | - Gyu Sang Choi
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si, Republic of Korea
| | - Oog-Jin Shon
- Department of Orthopedic Surgery, Yeungnam University College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 42415, Republic of Korea
| | - Gi Beom Kim
- Department of Orthopedic Surgery, Yeungnam University College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 42415, Republic of Korea.
| | - Min Cheol Chang
- Department of Physical Medicine and Rehabilitation, College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 42415, Republic of Korea.
| |
Collapse
|
24
|
Ma Z, Zhang M, Liu J, Yang A, Li H, Wang J, Hua D, Li M. An Assisted Diagnosis Model for Cancer Patients Based on Federated Learning. Front Oncol 2022; 12:860532. [PMID: 35311106 PMCID: PMC8928102 DOI: 10.3389/fonc.2022.860532] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Accepted: 02/08/2022] [Indexed: 12/24/2022] Open
Abstract
Since the 20th century, cancer has been a growing threat to human health. Cancer is a malignant tumor with high clinical morbidity and mortality, and there is a high risk of recurrence after surgery. Determining whether a recurrence is in situ is crucial for the further treatment of cancer patients, and, according to statistics, about 90% of cancer-related deaths are due to metastasis of primary tumor cells. The study of the location of cancer recurrence and its influencing factors is therefore of great significance for the clinical diagnosis and treatment of cancer. In this paper, we propose an assisted diagnosis model for cancer patients based on federated learning. In terms of data, the influencing factors of cancer recurrence and the particular sample requirements of federated learning were comprehensively considered: six first-level impact indicators were determined, and historical case data of cancer patients were collected. Within a federated learning framework combined with a convolutional neural network (CNN), the patients' physical examination indicators were used as input and their recurrence time and recurrence location as output to construct an auxiliary diagnostic model; linear regression, support vector regression, Bayesian regression, gradient boosting trees and a multilayer perceptron neural network were used as comparison algorithms. Under joint modeling and simulation on the five cancer types, the improved federated CNN prediction model reached an accuracy of more than 90%, outperforming single-site tree-based, linear and neural network models. The results show that the proposed auxiliary diagnosis model can assist doctors in diagnosing patients and in providing nutritional programs, has application value in prolonging patients' lives, and offers guidance in the field of cancer rehabilitation.
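The aggregation step at the heart of federated learning can be sketched as a weighted average of client model weights (FedAvg); this is a generic illustration, not the cited study's training protocol.

```python
import torch

def federated_average(client_states, client_sizes):
    """Weighted average of client model state_dicts by local dataset size (FedAvg)."""
    total = float(sum(client_sizes))
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg

# Example with two hypothetical hospital clients sharing the same tiny model:
# six first-level indicators as input, a two-dimensional recurrence output.
model_a = torch.nn.Linear(6, 2)
model_b = torch.nn.Linear(6, 2)
global_state = federated_average([model_a.state_dict(), model_b.state_dict()], [320, 180])

global_model = torch.nn.Linear(6, 2)
global_model.load_state_dict(global_state)   # aggregated weights for the next round
```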
Collapse
Affiliation(s)
- Zezhong Ma
- Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, China.,Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, China.,The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,College of Science, North China University of Science and Technology, Tangshan, China
| | - Meng Zhang
- The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Jiajia Liu
- College of Science, North China University of Science and Technology, Tangshan, China
| | - Aimin Yang
- Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, China.,Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, China.,The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,College of Science, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Hao Li
- The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Jian Wang
- The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Dianbo Hua
- Beijing Sitairui Cancer Data Analysis Joint Laboratory, Beijing, China
| | - Mingduo Li
- State Key Laboratory of Process Automation in Mining and Metallurgy, Beijing, China.,Beijing Key Laboratory of Process Automation in Mining and Metallurgy, Beijing, China
| |
Collapse
|
25
|
Bayat A, Pace DF, Sekuboyina A, Payer C, Stern D, Urschler M, Kirschke JS, Menze BH. Anatomy-Aware Inference of the 3D Standing Spine Posture from 2D Radiographs. Tomography 2022; 8:479-496. [PMID: 35202204 PMCID: PMC8879677 DOI: 10.3390/tomography8010039] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 01/30/2022] [Accepted: 02/03/2022] [Indexed: 11/21/2022] Open
Abstract
An important factor for the development of spinal degeneration, pain and the outcome of spinal surgery is known to be the balance of the spine. It must be analyzed in an upright, standing position to ensure physiological loading conditions and visualize load-dependent deformations. Despite the complex 3D shape of the spine, this analysis is currently performed using 2D radiographs, as all frequently used 3D imaging techniques require the patient to be scanned in a prone position. To overcome this limitation, we propose a deep neural network to reconstruct the 3D spinal pose in an upright standing position, loaded naturally. Specifically, we propose a novel neural network architecture, which takes orthogonal 2D radiographs and infers the spine’s 3D posture using vertebral shape priors. In this work, we define vertebral shape priors using an atlas and a spine shape prior, incorporating both into our proposed network architecture. We validate our architecture on digitally reconstructed radiographs, achieving a 3D reconstruction Dice of 0.95, indicating an almost perfect 2D-to-3D domain translation. Validating the reconstruction accuracy of a 3D standing spine on real data is infeasible due to the lack of a valid ground truth. Hence, we design a novel experiment for this purpose, using an orientation invariant distance metric, to evaluate our model’s ability to synthesize full-3D, upright, and patient-specific spine models. We compare the synthesized spine shapes from clinical upright standing radiographs to the same patient’s 3D spinal posture in the prone position from CT.
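The Dice overlap reported above is a standard volumetric metric; a minimal sketch of its computation on toy binary volumes follows.

```python
import numpy as np

# Dice overlap between a predicted and a reference binary volume
# (a value of 1.0 means perfect overlap).
def dice(pred, ref, eps=1e-8):
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Toy 3D volumes standing in for a reconstructed and a ground-truth vertebra mask.
pred = np.zeros((64, 64, 64), dtype=np.uint8)
ref = np.zeros_like(pred)
pred[20:40, 20:40, 20:40] = 1
ref[22:42, 20:40, 20:40] = 1
print(round(dice(pred, ref), 3))
```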
Collapse
Affiliation(s)
- Amirhossein Bayat
- Department of Computer Science, Technical University of Munich, 85748 Garching, Germany; (A.S.); (B.H.M.)
- Department of Neuroradiology, Klinikum rechts der Isar, 81675 Munich, Germany;
- Correspondence:
| | - Danielle F. Pace
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA;
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
| | - Anjany Sekuboyina
- Department of Computer Science, Technical University of Munich, 85748 Garching, Germany; (A.S.); (B.H.M.)
- Department of Neuroradiology, Klinikum rechts der Isar, 81675 Munich, Germany;
- Department of Quantitative Biomedicine, University of Zurich, 8006 Zurich, Switzerland
| | - Christian Payer
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; (C.P.); (D.S.)
| | - Darko Stern
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; (C.P.); (D.S.)
| | - Martin Urschler
- School of Computer Science, University of Auckland, Auckland 1010, New Zealand;
| | - Jan S. Kirschke
- Department of Neuroradiology, Klinikum rechts der Isar, 81675 Munich, Germany;
| | - Bjoern H. Menze
- Department of Computer Science, Technical University of Munich, 85748 Garching, Germany; (A.S.); (B.H.M.)
- Department of Quantitative Biomedicine, University of Zurich, 8006 Zurich, Switzerland
| |
Collapse
|
26
|
Nguyen TP, Jung JW, Yoo YJ, Choi SH, Yoon J. Intelligent Evaluation of Global Spinal Alignment by a Decentralized Convolutional Neural Network. J Digit Imaging 2022; 35:213-225. [PMID: 35064369 PMCID: PMC8921409 DOI: 10.1007/s10278-021-00533-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 09/30/2021] [Accepted: 10/31/2021] [Indexed: 01/12/2023] Open
Abstract
Degenerative changes of the spine can cause spinal misalignment, with part of the spine arching beyond normal limits or moving in an incorrect direction, potentially resulting in back pain and significantly limiting a person's mobility. The most important parameters related to spinal misalignment include pelvic incidence, pelvic tilt, lumbar lordosis, thoracic kyphosis, and cervical lordosis. As a general rule, alignment of the spine for diagnosis and surgical treatment is estimated from geometrical parameters measured manually by experienced doctors. However, these measurements consume experts' time and effort on repetitive tasks that could be automated, especially with the support of current artificial intelligence techniques. This paper focuses on the creation of a decentralized convolutional neural network to precisely measure 12 spinal alignment parameters. Specifically, the method detects regions of interest whose dimensions shrink by three orders of magnitude, focusing on the relevant region and providing key points as output. From these key points, the parameters representing spinal alignment are calculated. The method's performance, defined as the consistency of its measurements with manual measurement, was validated on 30 test cases: 10 of the 12 parameters reached a correlation coefficient > 0.8, with pelvic tilt showing the smallest absolute deviation of 1.156°.
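As an example of how an alignment parameter can be derived from detected key points, the sketch below computes a pelvic tilt angle from three hypothetical landmark positions; the landmark names, coordinates and sign conventions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Pelvic tilt as the angle between the vertical axis and the line joining the
# midpoint of the two femoral head centres to the centre of the sacral endplate.
def pelvic_tilt(hip_left, hip_right, s1_center):
    hip_axis = (np.asarray(hip_left, float) + np.asarray(hip_right, float)) / 2.0
    v = np.asarray(s1_center, float) - hip_axis   # vector from hip axis to S1 centre
    vertical = np.array([0.0, -1.0])              # image y grows downwards
    cos_angle = np.dot(v, vertical) / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Key points in image coordinates (x, y) in pixels, e.g. as regressed by a network.
print(round(pelvic_tilt((230, 610), (270, 615), (255, 540)), 1))
```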
Collapse
Affiliation(s)
- Thong Phi Nguyen
- Department of Mechanical Engineering, BK21 FOUR ERICA-ACE Centre, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan, Gyeonggi, 15588, Republic of Korea
| | - Ji Won Jung
- Department of Orthopaedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
| | - Yong Jin Yoo
- Department of Orthopaedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
| | - Sung Hoon Choi
- Department of Orthopaedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea.
| | - Jonghun Yoon
- Department of Mechanical Engineering, Hanyang University, 55, Hanyangdaehak-ro, Sangnok-gu, Gyeonggi-do, Ansan-si, 15588, Republic of Korea.
| |
Collapse
|
27
|
Overbergh T, Severijns P, Beaucage-Gauvreau E, Ackermans T, Moke L, Jonkers I, Scheys L. Subject-Specific Spino-Pelvic Models Reliably Measure Spinal Kinematics During Seated Forward Bending in Adult Spinal Deformity. Front Bioeng Biotechnol 2021; 9:720060. [PMID: 34540815 PMCID: PMC8440831 DOI: 10.3389/fbioe.2021.720060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 08/17/2021] [Indexed: 11/13/2022] Open
Abstract
Image-based subject-specific models and simulations are increasingly being introduced to complement the current, mostly static insights into adult spinal deformity (ASD) pathology and to improve the often poor surgical outcomes. Although the accuracy of a recently developed subject-specific modeling and simulation framework has already been quantified, its reliability for marker-driven kinematic analyses has not yet been investigated. The aim of this work was to evaluate the reliability of this subject-specific framework for measuring spine kinematics in ASD patients, in terms of (1) the overall test-retest repeatability; (2) the inter-operator agreement of spine kinematic estimates; and (3) the sensitivity of those spine kinematics to operator-dependent parameters of the framework. To evaluate the overall repeatability (1), four ASD subjects and one control subject participated in a test-retest study with a 2-week interval. At both time instances, subject-specific spino-pelvic models were created by one operator to simulate a recorded forward trunk flexion motion. Next, to evaluate inter-operator agreement (2), three trained operators each created a model for three ASD subjects to simulate the same forward trunk flexion motion. Intraclass correlation coefficients (ICCs) of the range of motion (ROM) of conventional spino-pelvic parameters [lumbar lordosis (LL), sagittal vertical axis (SVA), thoracic kyphosis (TK), pelvic tilt (PT), T1- and T9-spino-pelvic inclination (T1/T9-SPI)] were used to evaluate kinematic reliability (1) and inter-operator agreement (2). Lastly, a Monte-Carlo probabilistic simulation was used to evaluate the sensitivity of the intervertebral joint kinematics to operator variability in the framework for three ASD subjects (3). LL, SVA, and T1/T9-SPI had an excellent test-retest reliability for the ROM, while TK and PT did not. Inter-operator agreement was excellent, with ICC values higher than test-retest reliability. These results indicate that operator-induced uncertainty has a limited impact on kinematic simulations of spine flexion, while test-retest reliability has a much higher variability. The definition of the intervertebral joints in the framework was identified as the most sensitive operator-dependent parameter. Nevertheless, intervertebral joint estimations had small mean 90% confidence intervals (1.04°-1.75°). This work will contribute to understanding the limitations of kinematic simulations in ASD patients, thus leading to a better evaluation of future hypotheses.
Collapse
Affiliation(s)
- Thomas Overbergh
- Department of Development and Regeneration, Faculty of Medicine, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium
| | - Pieter Severijns
- Department of Development and Regeneration, Faculty of Medicine, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium
| | - Erica Beaucage-Gauvreau
- Department of Development and Regeneration, Faculty of Medicine, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium
| | - Thijs Ackermans
- Department of Development and Regeneration, Faculty of Medicine, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium
| | - Lieven Moke
- Department of Development and Regeneration, Faculty of Medicine, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium.,Division of Orthopaedics, University Hospitals Leuven, Leuven, Belgium
| | - Ilse Jonkers
- Department of Movement Sciences, Human Movement Biomechanics Research Group, KU Leuven, Leuven, Belgium
| | - Lennart Scheys
- Department of Development and Regeneration, Faculty of Medicine, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium.,Division of Orthopaedics, University Hospitals Leuven, Leuven, Belgium
| |
Collapse
|
28
|
Shiode R, Kabashima M, Hiasa Y, Oka K, Murase T, Sato Y, Otake Y. 2D-3D reconstruction of distal forearm bone from actual X-ray images of the wrist using convolutional neural networks. Sci Rep 2021; 11:15249. [PMID: 34315946 PMCID: PMC8316567 DOI: 10.1038/s41598-021-94634-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 05/06/2021] [Indexed: 01/08/2023] Open
Abstract
The purpose of the study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images and to verify its accuracy. The data comprised 173 computed tomography (CT) images and 105 actual X-ray images of healthy wrist joints. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from the actual X-ray images and fed to the network, making high-accuracy estimation of a 3D bone model from a small dataset possible. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 and 1.45 ± 0.41 mm, respectively.
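The training strategy above relies on digitally reconstructed radiographs (DRRs) generated from CT. A heavily simplified, parallel-beam DRR can be sketched as a line integral of attenuation followed by the Beer-Lambert law; the cited work uses a proper perspective projection model, so this is only a conceptual illustration.

```python
import numpy as np

def simple_drr(ct_hu, axis=1, mu_water=0.02):
    """Parallel-beam DRR: integrate attenuation along one axis of a CT volume."""
    # Convert Hounsfield units to linear attenuation and clamp air to zero.
    mu = np.clip(mu_water * (1.0 + ct_hu / 1000.0), 0.0, None)
    line_integrals = mu.sum(axis=axis)   # integrate along the beam direction
    drr = np.exp(-line_integrals)        # Beer-Lambert law
    return 1.0 - drr                     # invert so dense bone appears bright

# Toy CT volume in Hounsfield units standing in for a real wrist scan.
ct = np.random.randint(-1000, 1500, size=(128, 128, 128)).astype(np.float32)
print(simple_drr(ct).shape)   # (128, 128) projection image
```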
Collapse
Affiliation(s)
- Ryoya Shiode
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan.,Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan.
| | - Mototaka Kabashima
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
| | - Yuta Hiasa
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
| | - Kunihiro Oka
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
| | - Tsuyoshi Murase
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
| | - Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
| | - Yoshito Otake
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan.
| |
Collapse
|
29
|
Li X, Wang S, Niu X, Wang L, Chen P. 3D M-Net: Object-Specific 3D Segmentation Network Based on a Single Projection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5852595. [PMID: 34335721 PMCID: PMC8292052 DOI: 10.1155/2021/5852595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 06/28/2021] [Indexed: 11/30/2022]
Abstract
The internal assembly correctness of industrial products directly affects their performance and service life. Industrial products are usually protected by opaque housings, so most internal detection methods are based on X-rays. Because of the dense structural features of industrial products, it is challenging to detect occluded parts from projections alone. Limited by data acquisition and reconstruction speeds, CT-based detection methods cannot achieve real-time detection. To solve these problems, we designed an end-to-end single-projection 3D segmentation network. For a specific product, the network takes a single projection as input, segments the product components and outputs 3D segmentation results. In this study, the feasibility of the network was verified on data containing several typical assembly errors. The qualitative and quantitative results reveal that the segmentation results can meet the real-time requirements of industrial assembly inspection and exhibit high robustness to noise and component occlusion.
Collapse
Affiliation(s)
- Xuan Li
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Sukai Wang
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Xiaodong Niu
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Liming Wang
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| | - Ping Chen
- State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
| |
Collapse
|
30
|
Cina A, Bassani T, Panico M, Luca A, Masharawi Y, Brayda-Bruno M, Galbusera F. 2-step deep learning model for landmarks localization in spine radiographs. Sci Rep 2021; 11:9482. [PMID: 33947917 PMCID: PMC8096829 DOI: 10.1038/s41598-021-89102-w] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Accepted: 04/20/2021] [Indexed: 11/25/2022] Open
Abstract
In this work we propose to use Deep Learning to automatically calculate the coordinates of the vertebral corners in sagittal X-ray images of the thoracolumbar spine and, from those landmarks, to calculate relevant radiological parameters such as L1–L5 and L1–S1 lordosis and sacral slope. For this purpose, we used 10,193 images annotated with the landmark coordinates as the ground truth. We built a model that consists of two steps. In step 1, we trained two convolutional neural networks to identify each vertebra in the image and to calculate the landmark coordinates, respectively. In step 2, we refined the localization using cropped images of a single vertebra as input to another convolutional neural network, and we used geometrical transformations to map the corners back to the original image. For the localization tasks, we used a differentiable spatial to numerical transform (DSNT) as the top layer. We evaluated the model both qualitatively and quantitatively on a set of 195 test images. The median localization errors relative to the vertebral dimensions were 1.98% and 1.68% for the x and y coordinates, respectively. All the predicted angles were highly correlated with the ground truth, despite non-negligible absolute median errors of 1.84°, 2.43° and 1.98° for L1–L5, L1–S1 and SS, respectively. Our model calculates the coordinates of the vertebral corners with good accuracy and has a large potential for improving the reliability and repeatability of measurements in clinical tasks.
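The differentiable spatial to numerical transform (DSNT) mentioned above converts a heatmap into sub-pixel coordinates as an expectation over a softmax-normalized map; a minimal PyTorch sketch of that layer (not the authors' code) follows.

```python
import torch
import torch.nn.functional as F

def dsnt(heatmaps):
    """heatmaps: (batch, n_landmarks, H, W) -> coordinates in [-1, 1], shape (batch, n, 2)."""
    b, n, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.reshape(b, n, -1), dim=-1).reshape(b, n, h, w)
    ys = torch.linspace(-1.0, 1.0, h, device=heatmaps.device)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmaps.device)
    expected_x = (probs.sum(dim=2) * xs).sum(dim=-1)   # marginalize rows, then E[x]
    expected_y = (probs.sum(dim=3) * ys).sum(dim=-1)   # marginalize columns, then E[y]
    return torch.stack([expected_x, expected_y], dim=-1)

# Four corner heatmaps per vertebra crop, two crops in the batch.
coords = dsnt(torch.randn(2, 4, 64, 64))
print(coords.shape)   # torch.Size([2, 4, 2])
```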
Collapse
Affiliation(s)
- Andrea Cina
- IRCCS Istituto Ortopedico Galeazzi, Via Riccardo Galeazzi 4, 20161, Milan, Italy.
| | - Tito Bassani
- IRCCS Istituto Ortopedico Galeazzi, Via Riccardo Galeazzi 4, 20161, Milan, Italy
| | - Matteo Panico
- IRCCS Istituto Ortopedico Galeazzi, Via Riccardo Galeazzi 4, 20161, Milan, Italy
| | - Andrea Luca
- Department of Spine Surgery III, IRCCS Istituto Ortopedico Galeazzi, Via Riccardo Galeazzi 4, 20161, Milan, Italy
| | - Youssef Masharawi
- Department of Physiotherapy, Sackler Faculty of Medicine, The Stanley Steyer School of Health Professions, Tel Aviv University, Tel Aviv, Israel
| | - Marco Brayda-Bruno
- Department of Spine Surgery III, IRCCS Istituto Ortopedico Galeazzi, Via Riccardo Galeazzi 4, 20161, Milan, Italy
| | - Fabio Galbusera
- IRCCS Istituto Ortopedico Galeazzi, Via Riccardo Galeazzi 4, 20161, Milan, Italy
| |
Collapse
|
31
|
Vergari C, Skalli W, Abelin-Genevois K, Bernard JC, Hu Z, Cheng JCY, Chu WCW, Assi A, Karam M, Ghanem I, Bassani T, Galbusera F, Sconfienza LM, Brayda-Bruno M, Courtois I, Ebermeyer E, Vialle R, Langlais T, Dubousset J. Effect of curve location on the severity index for adolescent idiopathic scoliosis: a longitudinal cohort study. Eur Radiol 2021; 31:8488-8497. [PMID: 33884474 DOI: 10.1007/s00330-021-07944-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 02/08/2021] [Accepted: 03/25/2021] [Indexed: 01/04/2023]
Abstract
OBJECTIVES Adolescent idiopathic scoliosis (AIS) is the most common spinal disorder in children. A severity index was recently proposed to distinguish stable from progressive scoliosis at the first standardized biplanar radiographic exam. The aim of this work was to extend the validation of the severity index and to determine whether curve location influences its predictive capabilities. METHODS AIS patients with a Cobb angle between 10° and 25°, Risser 0-2, and no previous treatment were included. They underwent standing biplanar radiography and 3D reconstruction of the spine and pelvis, which allowed their severity index to be calculated. Patients were grouped by curve location (thoracic, thoracolumbar, lumbar) and followed up until skeletal maturity (Risser ≥ 3) or brace prescription. Their outcome was compared to the prediction made by the severity index. RESULTS In total, 205 AIS patients were included; 82% of the classified patients (155/189, 95% confidence interval [74-90%]) were correctly classified by the index, while 16 patients remained unclassified. The positive predictive value was 78% and the negative predictive value was 86%. Specificity (78%) was not significantly affected by curve location, while patients with thoracic and lumbar curves showed higher sensitivity (≥ 89%) than those with thoracolumbar curves (74%). CONCLUSIONS In this multicentric cohort of 205 patients, the severity index was used to predict the risk of progression from mild to moderate scoliosis, with similar results across the typical major curve locations. This index represents a novel tool to help the clinician and the patient tailor follow-up and, for progressive patients, decide on brace treatment. KEY POINTS • The severity index of adolescent idiopathic scoliosis has the potential to detect patients with progressive scoliosis as early as the first exam. • Out of 205 patients, 82% were correctly classified as either stable or progressive by the severity index. • The location of the main curve had a small effect on the predictive capability of the index.
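The diagnostic statistics quoted above (sensitivity, specificity and predictive values) follow directly from a 2x2 confusion matrix; the sketch below uses made-up counts purely for illustration, not the study's data.

```python
# Diagnostic statistics from a 2x2 confusion matrix.
def diagnostic_stats(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Example: predicted "progressive" vs. observed progression at skeletal maturity.
stats = diagnostic_stats(tp=80, fp=22, tn=75, fn=12)
print({k: round(v, 2) for k, v in stats.items()})
```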
Collapse
Affiliation(s)
- Claudio Vergari
- Arts et Métiers Institute of Technology, Université Sorbonne Paris Nord, IBHGC - Institut de Biomécanique Humaine Georges Charpak, HESAM Université, 151 bd de l'Hôpital, F-75013, Paris, France.
| | - Wafa Skalli
- Arts et Métiers Institute of Technology, Université Sorbonne Paris Nord, IBHGC - Institut de Biomécanique Humaine Georges Charpak, HESAM Université, 151 bd de l'Hôpital, F-75013, Paris, France
| | - Kariman Abelin-Genevois
- Department of Orthopaedic Surgery and Children Conservative Treatment, Croix-Rouge française, Centre Médico-Chirurgical et de Réadaptation des Massues, Lyon, France
| | - Jean Claude Bernard
- Department of Orthopaedic Surgery and Children Conservative Treatment, Croix-Rouge française, Centre Médico-Chirurgical et de Réadaptation des Massues, Lyon, France
| | - Zongshan Hu
- SH Ho Scoliosis Research Laboratory, Department of Orthopaedics and Traumatology, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
| | - Jack Chun Yiu Cheng
- SH Ho Scoliosis Research Laboratory, Department of Orthopaedics and Traumatology, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
| | - Winnie Chiu Wing Chu
- Department of Imaging and Interventional Radiology, Faculty of Medicine, The Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
| | - Ayman Assi
- Laboratory of Biomechanics and Medical Imaging, Faculty of Medicine, University of Saint-Joseph, Beirut, Lebanon
| | - Mohammad Karam
- Laboratory of Biomechanics and Medical Imaging, Faculty of Medicine, University of Saint-Joseph, Beirut, Lebanon
| | - Ismat Ghanem
- Laboratory of Biomechanics and Medical Imaging, Faculty of Medicine, University of Saint-Joseph, Beirut, Lebanon
| | - Tito Bassani
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
| | | | - Luca Maria Sconfienza
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy.,Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Milan, Italy
| | | | | | - Eric Ebermeyer
- Unite Rachis, CHU - Hopital Bellevue, Saint-Etienne, France
| | - Raphael Vialle
- Sorbonne Université, Department of Pediatric Orthopaedics, Hôpital Armand Trousseau, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France
| | - Tristan Langlais
- Sorbonne Université, Department of Pediatric Orthopaedics, Hôpital Armand Trousseau, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France
| | - Jean Dubousset
- Arts et Métiers Institute of Technology, Université Sorbonne Paris Nord, IBHGC - Institut de Biomécanique Humaine Georges Charpak, HESAM Université, 151 bd de l'Hôpital, F-75013, Paris, France
| |
Collapse
|
32
|
Nguyen TP, Chae DS, Park SJ, Kang KY, Yoon J. Deep learning system for Meyerding classification and segmental motion measurement in diagnosis of lumbar spondylolisthesis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102371] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
33
|
Chae DS, Nguyen TP, Park SJ, Kang KY, Won C, Yoon J. Decentralized convolutional neural network for evaluating spinal deformity with spinopelvic parameters. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105699. [PMID: 32805697 DOI: 10.1016/j.cmpb.2020.105699] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Accepted: 07/24/2020] [Indexed: 06/11/2023]
Abstract
Low back pain caused by abnormal spinal alignment is one of the most common musculoskeletal symptoms and leads not only to reduced productivity but also to personal suffering. In the clinical diagnosis of this condition, assessing adult spinal deformity is an indispensable step for highlighting abnormal values, issuing timely warnings and providing precise geometric dimensions for treatment planning. This paper presents an automated method for precisely measuring spinopelvic parameters using a decentralized convolutional neural network as an efficient replacement for the current manual process, which not only requires experienced surgeons but is also limited in its ability to process the large numbers of images produced in the era of big data. The proposed method gradually narrows the regions of interest (ROIs) used for feature extraction, leading the model to focus on the necessary geometric characteristics represented as keypoints. From the obtained keypoints, parameters representing the spinal deformity are calculated; their consistency with manual measurement was validated on 40 test cases, with mean absolute deviations ranging from 1.45° for PTA (the minimum) to 3.51° for LSA (the maximum).
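The agreement check described above reduces to the mean absolute deviation between automated and manual measurements; a minimal sketch with mock values (not study data) follows.

```python
import numpy as np

# Mock angle measurements (degrees) for a handful of test cases.
manual = np.array([14.2, 21.5, 9.8, 33.0, 18.7])      # expert measurement
automatic = np.array([15.0, 20.1, 11.2, 31.4, 19.3])   # network output

abs_dev = np.abs(automatic - manual)
print("mean absolute deviation:", round(abs_dev.mean(), 2), "deg")
print("maximum deviation:      ", round(abs_dev.max(), 2), "deg")
```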
Collapse
Affiliation(s)
- Dong-Sik Chae
- Department of Orthopaedic Surgery, International St. Mary's Hospital, Catholic Kwandong University College of Medicine, Incheon, Republic of Korea
| | - Thong Phi Nguyen
- Department of Mechanical Design Engineering, Hanyang University, 222, Wangsimni-ro, Seongdongsu, Seoul 04763, Republic of Korea
| | - Sung-Jun Park
- Department of Mechanical Engineering, Korea National University of Transportation, 50 Daehak-ro, Chungju, Chungcheongbuk-do 380-702, Republic of Korea
| | - Kyung-Yil Kang
- Department of Medicine, Catholic Kwandong Graduate School, 24, Beomil-ro, 579 Beon-gil, Gangneung-si, Gangwon-do, 25601, Republic of Korea
| | - Chanhee Won
- Department of Mechanical Design Engineering, Hanyang University, 222, Wangsimni-ro, Seongdongsu, Seoul 04763, Republic of Korea
| | - Jonghun Yoon
- Department of Mechanical Engineering, Hanyang University, 55, Hanyangdaehak-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 15588, Republic of Korea.
| |
Collapse
|
34
|
Development and validation of a modeling workflow for the generation of image-based, subject-specific thoracolumbar models of spinal deformity. J Biomech 2020; 110:109946. [DOI: 10.1016/j.jbiomech.2020.109946] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 07/07/2020] [Accepted: 07/08/2020] [Indexed: 11/24/2022]
|
35
|
Vergari C, Skalli W, Gajny L. A convolutional neural network to detect scoliosis treatment in radiographs. Int J Comput Assist Radiol Surg 2020; 15:1069-1074. [PMID: 32337647 DOI: 10.1007/s11548-020-02173-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Accepted: 04/16/2020] [Indexed: 02/08/2023]
Abstract
PURPOSE The aim of this work is to propose a classification algorithm to automatically detect treatment for scoliosis (brace, implant or no treatment) in postero-anterior radiographs. Such automatic labelling of radiographs could represent a step towards global automatic radiological analysis. METHODS Seven hundred and ninety-six frontal radiographs of adolescents were collected (84 patients wearing a brace, 325 with a spinal implant and 387 reference images with no treatment). The dataset was augmented to a total of 2096 images. A classification model was built, composed of a feed-forward convolutional neural network (CNN) followed by a discriminant analysis; the output was the probability that a given image contained a brace, a spinal implant or neither. The model was validated with a stratified tenfold cross-validation procedure. Performance was estimated by calculating the average accuracy. RESULTS 98.3% of the radiographs were correctly classified as reference, brace or implant, excluding the 2.0% of images left unclassified. 99.7% of brace radiographs were correctly detected, while most of the errors occurred in the reference group (i.e. 2.1% of reference images were wrongly classified). CONCLUSION The proposed classification model, the originality of which is the coupling of a CNN with a discriminant analysis, can be used to automatically label radiographs for the presence of scoliosis treatment. This information is usually missing from DICOM metadata, so such a method could facilitate the use of large databases. Furthermore, the same model architecture could potentially be applied to other radiograph classification tasks, such as sex or the presence of scoliotic deformity.
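The two-stage design named above, CNN feature extraction followed by a discriminant analysis classifier, can be sketched as follows; the backbone, feature dimension and mock data are placeholders rather than the cited model's configuration.

```python
import torch
import torchvision
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Generic CNN backbone used as a fixed feature extractor (512-dimensional output).
backbone = torchvision.models.resnet18()
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(images):          # images: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return backbone(images).numpy()

# Mock radiographs and labels: 0 = no treatment, 1 = brace, 2 = spinal implant.
images = torch.rand(30, 3, 224, 224)
labels = torch.randint(0, 3, (30,)).numpy()

features = extract_features(images)
clf = LinearDiscriminantAnalysis().fit(features, labels)
print("training accuracy:", round(clf.score(features, labels), 2))
```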
Collapse
Affiliation(s)
- Claudio Vergari
- Arts et Métiers, Institut de Biomécanique Humaine Georges Charpak, 151 bd de l'Hôpital, 75013, Paris, France
| | - Wafa Skalli
- Arts et Métiers, Institut de Biomécanique Humaine Georges Charpak, 151 bd de l'Hôpital, 75013, Paris, France
| | - Laurent Gajny
- Arts et Métiers, Institut de Biomécanique Humaine Georges Charpak, 151 bd de l'Hôpital, 75013, Paris, France.
| |
Collapse
|
36
|
Galbusera F, Niemeyer F, Bassani T, Sconfienza LM, Wilke HJ. Estimating the three-dimensional vertebral orientation from a planar radiograph: Is it feasible? J Biomech 2020; 102:109328. [DOI: 10.1016/j.jbiomech.2019.109328] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 08/09/2019] [Accepted: 08/30/2019] [Indexed: 10/26/2022]
|