1. Singh Rana SS, Ghahremani JS, Woo JJ, Navarro RA, Ramkumar PN. A Glossary of Terms in Artificial Intelligence for Healthcare. Arthroscopy 2025;41:516-531. PMID: 39414094. DOI: 10.1016/j.arthro.2024.08.010.
Abstract
In recent decades, artificial intelligence (AI) has infiltrated a variety of domains, including media, education, and medicine, yet no glossary, lexicon, or reference exists with which the uninitiated medical professional can explore the new terminology. As AI-driven technologies and applications become more available for clinical use in healthcare settings, an understanding of the basic components, models, and tasks related to AI is crucial for clinical and academic appraisal. Here, we present a glossary of AI definitions that healthcare professionals can use to deepen their understanding of AI during this fourth industrial revolution. LEVEL OF EVIDENCE: Level V, expert opinion.
Affiliation(s)
- S Shamtej Singh Rana: The Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California, U.S.A.
- Jacob S Ghahremani: The Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California, U.S.A.
- Joshua J Woo: The Warren Alpert Medical School of Brown University, Providence, Rhode Island, U.S.A.
- Ronald A Navarro: The Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California, U.S.A.; Kaiser Permanente South Bay Medical Center, Harbor City, California, U.S.A.
2. Fujita A, Goto K, Ueda A, Kuroda Y, Kawai T, Okuzu Y, Okuno Y, Matsuda S. Measurement of the Acetabular Cup Orientation After Total Hip Arthroplasty Based on 3-Dimensional Reconstruction From a Single X-Ray Image Using Generative Adversarial Networks. J Arthroplasty 2025;40:136-143.e1. PMID: 38944061. DOI: 10.1016/j.arth.2024.06.059.
Abstract
BACKGROUND The purpose of this study was to reconstruct 3-dimensional (3D) computed tomography (CT) images from single anteroposterior (AP) postoperative total hip arthroplasty (THA) X-ray images using a deep learning approach known as generative adversarial networks (GANs) and to validate the accuracy of cup angle measurement on GAN-generated CT. METHODS We used 2 GAN-based models, CycleGAN and X2CT-GAN, to generate 3D CT images from X-ray images of 386 patients who underwent primary THA using a cementless cup. The training dataset consisted of 522 CT images and 2,282 X-ray images. Image quality was validated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The cup anteversion and inclination measurements on the GAN-generated CT images were compared with the actual CT measurements. Statistical analyses of absolute measurement errors were performed using Mann-Whitney U tests and nonlinear regression analyses. RESULTS The study achieved 3D reconstruction from single AP postoperative THA X-ray images using GANs, with an excellent PSNR (37.40) and SSIM (0.74). The median absolute differences in radiographic anteversion and inclination were 3.45° and 3.25°, respectively. Absolute measurement errors tended to be larger in cases with cup malposition than in those with optimal cup orientation. CONCLUSIONS This study demonstrates the potential of GANs for 3D reconstruction from single AP postoperative THA X-ray images to evaluate cup orientation. Further investigation and refinement of this model are required to improve its performance.
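For readers unfamiliar with the two image-quality metrics reported above, the following is a minimal sketch (not the authors' code) of how PSNR and SSIM are typically computed with scikit-image; `real_ct` and `gan_ct` are hypothetical placeholder arrays standing in for a ground-truth and a GAN-generated CT slice.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
# Placeholder slices: a "ground-truth" image and a slightly perturbed "generated" one
real_ct = rng.random((128, 128)).astype(np.float32)
gan_ct = real_ct + 0.01 * rng.standard_normal((128, 128)).astype(np.float32)

# PSNR: log-scale ratio of peak intensity to mean squared error
psnr = peak_signal_noise_ratio(real_ct, gan_ct, data_range=1.0)
# SSIM: perceptual similarity over local luminance, contrast, and structure
ssim = structural_similarity(real_ct, gan_ct, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```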
Affiliation(s)
- Akira Fujita: Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan; Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Koji Goto: Department of Orthopaedic Surgery, Kindai University Hospital, Osaka, Japan
- Akihiko Ueda: Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yutaka Kuroda: Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Toshiyuki Kawai: Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yaichiro Okuzu: Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yasushi Okuno: Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Shuichi Matsuda: Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
3. El Kojok Z, Al Khansa H, Trad F, Chehab A. Augmenting a spine CT scans dataset using VAEs, GANs, and transfer learning for improved detection of vertebral compression fractures. Comput Biol Med 2025;184:109446. PMID: 39550911. DOI: 10.1016/j.compbiomed.2024.109446.
Abstract
In recent years, deep learning has become a popular tool for analyzing and classifying medical images. However, challenges such as limited data availability, high labeling costs, and privacy concerns remain significant obstacles. As such, generative models have been extensively explored as a way to generate new images and overcome these challenges. In this paper, we augment a dataset of chest CT scans for Vertebral Compression Fractures (VCFs) collected from the American University of Beirut Medical Center (AUBMC), specifically targeting the detection of incidental fractures that are often overlooked in routine chest CTs, as these scans are not typically focused on spinal analysis. Our goal is to enhance AI systems to enable automated early detection of such incidental fractures, addressing a critical healthcare gap and improving patient outcomes by catching fractures that might otherwise go undiagnosed. We first generate a synthetic dataset based on the segmented CTSpine1K dataset to simulate real grayscale data that aligns with our specific scenario. Then, we use this generated data to evaluate the generative capabilities of Deep Convolutional Generative Adversarial Networks (DCGANs), variational autoencoders (VAEs), and VAE-GAN models. The VAE-GAN model demonstrated the highest performance, achieving a Fréchet Inception Distance (FID) five times lower than the other architectures. To adapt this model to real-image scenarios, we perform transfer learning on the GAN, training it with the real dataset collected from AUBMC and generating additional samples. Finally, we train a CNN using augmented datasets that include both real and generated synthetic data and compare its performance to training on real data alone. We then evaluate the model exclusively on a test set composed of real images to assess the effect of the generated data on real-world performance. We find that training on augmented datasets significantly improves classification accuracy on the real-image test set by 16 percentage points, from 73% to 89%. This improvement demonstrates that the generated data is of high quality and enhances the model's ability to generalize to unseen, real data.
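As a companion to the FID comparison above, here is a minimal sketch of the Fréchet Inception Distance computed from feature statistics; `feats_real` and `feats_fake` are assumed to be Inception-network features of real and synthetic scans, which this illustration replaces with random placeholders.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet distance between two Gaussians fitted to feature sets."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)          # matrix square root of the covariance product
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
# Placeholder 64-dimensional features for 256 real and 256 generated images
print(f"FID = {fid(rng.standard_normal((256, 64)), rng.standard_normal((256, 64))):.3f}")
```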
Affiliation(s)
- Zeina El Kojok: Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
- Hadi Al Khansa: Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
- Fouad Trad: Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
- Ali Chehab: Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
4. Chen Y, Gao Y, Fu X, Chen Y, Wu J, Guo C, Li X. Automatic 3D reconstruction of vertebrae from orthogonal bi-planar radiographs. Sci Rep 2024;14:16165. PMID: 39003269; PMCID: PMC11246511. DOI: 10.1038/s41598-024-65795-7.
Abstract
When conducting spine-related diagnosis and surgery, the three-dimensional (3D) upright posture of the spine under natural weight bearing is of significant clinical value for physicians analyzing the forces on the spine. However, existing medical imaging technologies cannot meet this need: mainstream 3D volumetric modalities (e.g., CT and MRI) require patients to lie down during imaging, while modalities acquired in an upright posture (e.g., radiography) produce only 2D projections, which lose information about spinal anatomy and curvature. Deep learning-based 3D reconstruction methods offer the potential to overcome these limitations. In this paper, we propose a novel deep learning framework, ReVerteR, which realizes automatic 3D Reconstruction of Vertebrae from orthogonal bi-planar Radiographs. Using a self-attention mechanism and a specially designed loss function combining Dice, Hausdorff, Focal, and MSE terms, ReVerteR alleviates the sample-imbalance problem during reconstruction and fuses the centroid annotation with the focused vertebra. Furthermore, aiming at automatic and customized 3D spinal reconstruction in real-world scenarios, we extend ReVerteR to a clinical deployment-oriented framework and develop an interactive interface integrating all functions to enhance human-computer interaction during clinical decision-making. Extensive experiments and visualizations on datasets constructed from two benchmark spinal CT datasets, VerSe 2019 and VerSe 2020, demonstrate the effectiveness of ReVerteR. With the 3D upright posture of the spine under natural weight bearing effectively reconstructed, the proposed method is expected to better support doctors in making clinical decisions during spine-related diagnosis and surgery.
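The loss described above combines Dice, Hausdorff, Focal, and MSE terms. The PyTorch sketch below illustrates such a combination under stated assumptions: it implements only the Dice, focal, and MSE terms with equal weights, omits the Hausdorff term (which typically requires a distance-transform approximation), and is not the authors' ReVerteR code.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred_logits, target, alpha=0.25, gamma=2.0, eps=1e-6):
    prob = torch.sigmoid(pred_logits)
    # Soft Dice loss over the whole volume (counters class imbalance)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Binary focal loss: down-weights easy voxels via the (1 - p_t)^gamma factor
    bce = F.binary_cross_entropy_with_logits(pred_logits, target, reduction="none")
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    focal = (alpha * (1.0 - p_t) ** gamma * bce).mean()
    # MSE between predicted occupancy and target volume
    mse = F.mse_loss(prob, target)
    return dice + focal + mse  # equal weights assumed; the paper's weights are not given here

pred = torch.randn(1, 1, 32, 32, 32)                       # toy predicted volume (logits)
tgt = (torch.rand(1, 1, 32, 32, 32) > 0.8).float()         # toy binary target volume
print(combined_loss(pred, tgt).item())
```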
Affiliation(s)
- Yuepeng Chen: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China; Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, China
- Yue Gao: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Xiangling Fu: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Yingyin Chen: Guangdong Provincial Key Laboratory of Tumor Interventional Diagnosis and Treatment, Zhuhai People's Hospital, Zhuhai, 519000, China
- Ji Wu: Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, China; Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China; College of AI, Tsinghua University, Beijing, 100084, China
- Chenyi Guo: Institute for Intelligent Healthcare, Tsinghua University, Beijing, 100084, China; Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Xiaodong Li: Department of Spine and Osteology, Zhuhai People's Hospital, Zhuhai, 519000, China
5. Xing X, Li X, Wei C, Zhang Z, Liu O, Xie S, Chen H, Quan S, Wang C, Yang X, Jiang X, Shuai J. DP-GAN+B: A lightweight generative adversarial network based on depthwise separable convolutions for generating CT volumes. Comput Biol Med 2024;174:108393. PMID: 38582001. DOI: 10.1016/j.compbiomed.2024.108393.
Abstract
X-rays, commonly used in clinical settings, offer advantages such as low radiation and cost-efficiency. However, their limitation lies in the inability to distinctly visualize overlapping organs. In contrast, Computed Tomography (CT) scans provide a three-dimensional view, overcoming this drawback but at the expense of higher radiation doses and increased costs. Hence, from both the patient's and the hospital's standpoint, there is substantial medical and practical value in reconstructing three-dimensional CT volumes from two-dimensional X-ray images. In this paper, we introduce DP-GAN+B as a pioneering approach for transforming two-dimensional frontal and lateral lung X-rays into three-dimensional lung CT volumes. Our method innovatively employs depthwise separable convolutions instead of traditional convolutions and introduces vector and fusion losses for superior performance. Compared to prior models, DP-GAN+B reduces the generator network parameters by 21.104 M and the discriminator network parameters by 10.82 M, a total reduction of 31.924 M (44.17%). Experimental results demonstrate that our network can effectively generate clinically relevant, high-quality CT images from X-ray data, presenting a promising solution for enhancing diagnostic imaging while mitigating cost and radiation concerns.
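As a point of reference for the parameter savings claimed above, here is a minimal PyTorch sketch of a depthwise separable convolution, the building block DP-GAN+B substitutes for standard convolutions; the layer sizes are illustrative and not taken from the paper.

```python
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch), spatial mixing only
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv2d(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # the separable version uses roughly 8x fewer parameters
```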
Affiliation(s)
- Xinlong Xing: Postgraduate Training Base Alliance of Wenzhou Medical University, Wenzhou, Zhejiang, 325000, China; Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
- Xiaosen Li: School of Artificial Intelligence, Guangxi Minzu University, Nanning, 530006, China
- Chaoyi Wei: Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
- Zhantian Zhang: Postgraduate Training Base Alliance of Wenzhou Medical University, Wenzhou, Zhejiang, 325000, China; Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
- Ou Liu: Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
- Senmiao Xie: Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Haoman Chen: Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
- Shichao Quan: Department of Big Data in Health Science, The First Affiliated Hospital of Wenzhou Medical University, China
- Cong Wang: Department of Mathematics and Statistics, Carleton College, 300 N College St, Northfield, MN, 55057, USA
- Xin Yang: School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
- Xiaoming Jiang: Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
- Jianwei Shuai: Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, 325000, China
6. Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024;171:111313. PMID: 38237518. DOI: 10.1016/j.ejrad.2024.111313.
Abstract
PURPOSE In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies revolutionizing the visualization and analysis of the human spine. Among these developments, Generative Adversarial Networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review provides a comprehensive overview of the use of GANs in spinal imaging and emphasizes their potential to improve the diagnosis and treatment of spine-related disorders. A review dedicated to GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain may not be fully addressed in broader reviews of GANs in general medical imaging; such a focused review can offer insights into the tailored solutions and innovations that GANs bring to spinal imaging. METHODS An extensive literature search covering 2017 until July 2023 was conducted using the major search engines to identify studies that used GANs in spinal imaging. RESULTS Reported applications include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time; the generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also utilized to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Lastly, they can convert CT to MRI images, with the potential to generate near-MR images from CT without an MRI scan. CONCLUSIONS GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
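For readers new to the GAN framework this review surveys, the following is a minimal PyTorch sketch of the adversarial objective underlying all of the reviewed models; the toy generator and discriminator are generic stand-ins, not any of the spine-imaging architectures discussed.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))   # toy generator
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))    # toy discriminator
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32)   # stand-in for a batch of real image features
z = torch.randn(8, 16)      # latent noise
fake = G(z)

# Discriminator objective: label real samples 1, generated samples 0
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
# Generator objective: fool the discriminator into labeling fakes as real
g_loss = bce(D(fake), torch.ones(8, 1))
print(d_loss.item(), g_loss.item())
```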
Affiliation(s)
- Konstantinos Vrettos: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis: Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis: Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
7. Li B, Zhang J, Wang Q, Li H, Wang Q. Three-dimensional spine reconstruction from biplane radiographs using convolutional neural networks. Med Eng Phys 2024;123:104088. PMID: 38365341. DOI: 10.1016/j.medengphy.2023.104088.
Abstract
PURPOSE The purpose of this study was to develop and evaluate a deep learning network for three-dimensional reconstruction of the spine from biplanar radiographs. METHODS The proposed approach focused on extracting similar features and multiscale features of bone tissue in biplanar radiographs. Bone tissue features were reconstructed across dimensions to generate three-dimensional volumes, with the number of feature mappings gradually reduced during reconstruction to transform the high-dimensional features into the three-dimensional image domain. We produced and released eight public datasets to train and test the proposed network. Two new evaluation metrics were proposed and combined with four classical metrics to measure the performance of the method. RESULTS In comparative experiments, the reconstructions achieved a Hausdorff distance of 1.85 mm, a surface overlap of 0.2 mm, a volume overlap of 0.9664, and an offset of only 0.21 mm from the vertebral body centroid. These results indicate that the proposed method is reliable.
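Among the evaluation metrics above, the Hausdorff distance is easy to reproduce; below is a minimal SciPy sketch on two placeholder point clouds, assuming surface points have already been extracted from the reconstructed and ground-truth volumes (the paper's own evaluation pipeline is not public here).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
recon_pts = rng.random((500, 3))   # points sampled from the reconstructed vertebral surface
truth_pts = rng.random((500, 3))   # points sampled from the ground-truth surface

# Symmetric Hausdorff distance: worst-case nearest-neighbor gap in either direction
d_ab = directed_hausdorff(recon_pts, truth_pts)[0]
d_ba = directed_hausdorff(truth_pts, recon_pts)[0]
print(f"Hausdorff distance: {max(d_ab, d_ba):.3f}")
```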
Affiliation(s)
- Bo Li: Department of Electronic Engineering, Yunnan University, Kunming, China
- Junhua Zhang: Department of Electronic Engineering, Yunnan University, Kunming, China
- Qian Wang: Department of Electronic Engineering, Yunnan University, Kunming, China
- Hongjian Li: The First People's Hospital of Yunnan Province, China
- Qiyang Wang: The First People's Hospital of Yunnan Province, China
8. Saravi B, Guzel HE, Zink A, Ülkümen S, Couillard-Despres S, Wollborn J, Lang G, Hassel F. Synthetic 3D Spinal Vertebrae Reconstruction from Biplanar X-rays Utilizing Generative Adversarial Networks. J Pers Med 2023;13:1642. PMID: 38138869; PMCID: PMC10744485. DOI: 10.3390/jpm13121642.
Abstract
Computed tomography (CT) offers detailed insights into the internal anatomy of patients, particularly for spinal vertebrae examination. However, CT scans are associated with higher radiation exposure and cost compared to conventional X-ray imaging. In this study, we applied a Generative Adversarial Network (GAN) framework to reconstruct 3D spinal vertebrae structures from synthetic biplanar X-ray images, specifically focusing on anterior and lateral views. The synthetic X-ray images were generated using the DRRGenerator module in 3D Slicer by incorporating segmentations of spinal vertebrae in CT scans for the region of interest. This approach leverages a novel feature fusion technique based on X2CT-GAN to combine information from both views and employs a combination of mean squared error (MSE) loss and adversarial loss to train the generator, resulting in high-quality synthetic 3D spinal vertebrae CTs. A total of n = 440 CT datasets were processed. We evaluated the performance of our model using multiple metrics, including mean absolute error (MAE) (for each slice of the 3D volume (MAE0) and for the entire 3D volume (MAE)), cosine similarity, peak signal-to-noise ratio (PSNR), 3D peak signal-to-noise ratio (PSNR-3D), and structural similarity index (SSIM). The average PSNR was 28.394 dB, PSNR-3D was 27.432, SSIM was 0.468, cosine similarity was 0.484, MAE0 was 0.034, and MAE was 85.359. The results demonstrated the effectiveness of this approach in reconstructing 3D spinal vertebrae structures from biplanar X-rays, although the model showed some limitations in accurately capturing fine bone structures and maintaining the precise morphology of the vertebrae. This technique has the potential to enhance the diagnostic capabilities of low-cost X-ray machines while reducing the radiation exposure and cost associated with CT scans, paving the way for future applications in spinal imaging and diagnosis.
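The generator objective described above combines an MSE reconstruction loss with an adversarial loss. The following PyTorch sketch shows one common form of that combination; the weighting `lam` and the discriminator output shape are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(fake_ct, real_ct, d_logits_on_fake, lam=10.0):
    # Adversarial term: push the discriminator to label generated volumes as real
    adv = bce(d_logits_on_fake, torch.ones_like(d_logits_on_fake))
    # Reconstruction term: match the generated CT volume to the target voxel-wise
    rec = mse(fake_ct, real_ct)
    return adv + lam * rec  # lam is an assumed reconstruction weight

fake = torch.rand(1, 1, 64, 64, 64)   # toy generated CT volume
real = torch.rand(1, 1, 64, 64, 64)   # toy ground-truth CT volume
d_out = torch.randn(1, 1)             # toy discriminator logits on the fake volume
print(generator_loss(fake, real, d_out).item())
```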
Affiliation(s)
- Babak Saravi: Department of Orthopedics and Trauma Surgery, Medical Center—University of Freiburg, Faculty of Medicine, University of Freiburg, 79106 Freiburg, Germany; Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany; Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, 5020 Salzburg, Austria; Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Hamza Eren Guzel: Department of Radiology, University of Health Sciences, Izmir Bozyaka Training and Research Hospital, Izmir 35170, Türkiye
- Alisia Zink: Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Sara Ülkümen: Department of Orthopedics and Trauma Surgery, Medical Center—University of Freiburg, Faculty of Medicine, University of Freiburg, 79106 Freiburg, Germany
- Sebastien Couillard-Despres: Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, 5020 Salzburg, Austria; Austrian Cluster for Tissue Regeneration, 1200 Vienna, Austria
- Jakob Wollborn: Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Gernot Lang: Department of Orthopedics and Trauma Surgery, Medical Center—University of Freiburg, Faculty of Medicine, University of Freiburg, 79106 Freiburg, Germany
- Frank Hassel: Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
9. Nguyen DCT, Benameur S, Mignotte M, Lavoie F. 3D biplanar reconstruction of lower limbs using nonlinear statistical models. Med Biol Eng Comput 2023;61:2877-2894. PMID: 37505415. DOI: 10.1007/s11517-023-02882-3.
Abstract
Three-dimensional (3D) reconstruction of lower limbs is of great interest in surgical planning, computer-assisted surgery, and biomechanical applications. The use of 3D imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) has limitations such as high radiation and expense. Three-dimensional reconstruction methods from biplanar X-ray images therefore represent an attractive alternative. In this paper, we present a new unsupervised 3D reconstruction method for the patella, talus, and pelvis using calibrated biplanar (45- and 135-degree oblique) radiographic images and prior information on the geometric/anatomical structure of these complex bones. A multidimensional scaling (MDS)-based nonlinear dimensionality reduction algorithm is applied to exploit this prior geometric/anatomical information; it represents the relevant deformations existing in the training set. Our method is based on a hybrid likelihood using regions and contours. The edge-based term represents the relation between the external contours of the bone projections and an edge potential field estimated on the radiographic images. The region-based term is the non-overlapping ratio between segmented and projected bone regions of interest (RoIs). Our automatic 3D reconstruction model entails stochastically minimizing an energy function, yielding an estimate of the deformation parameters of the bone shape. The method has been successfully tested on 13 biplanar radiographic image pairs, yielding very promising results.
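To make the MDS step concrete, here is a minimal scikit-learn sketch of multidimensional scaling applied to a small training set of flattened bone-shape vectors; the array sizes are placeholders and the embedding dimension is an assumption, not the paper's setting.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Placeholder training set: 13 shapes, each flattened to 1000 3D vertex coordinates
shapes = rng.random((13, 3000))

# MDS finds a low-dimensional embedding preserving pairwise shape distances,
# giving compact coordinates for the deformations present in the training set
embedding = MDS(n_components=3, random_state=0).fit_transform(shapes)
print(embedding.shape)  # (13, 3): low-dimensional deformation coordinates
```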
Affiliation(s)
- Dac Cong Tai Nguyen: Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Québec, Montréal, Canada; Eiffel Medtech Inc., Québec, Montréal, Canada
- Max Mignotte: Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Québec, Montréal, Canada
- Frédéric Lavoie: Eiffel Medtech Inc., Québec, Montréal, Canada; Orthopedic Surgery Department, Centre Hospitalier de l'Université de Montréal (CHUM), Québec, Montréal, Canada
10. Oh J, Hwang S, Lee J. Enhancing X-ray-Based Wrist Fracture Diagnosis Using HyperColumn-Convolutional Block Attention Module. Diagnostics (Basel) 2023;13:2927. PMID: 37761294; PMCID: PMC10529517. DOI: 10.3390/diagnostics13182927.
Abstract
Fractures affect nearly 9.45% of the South Korean population, with radiography being the primary diagnostic tool. This research employs a machine-learning methodology that integrates HyperColumn techniques with the convolutional block attention module (CBAM) to enhance fracture detection in X-ray radiographs. Utilizing the EfficientNet-B0 and DenseNet169 models bolstered by the HyperColumn and the CBAM, distinct improvements in fracture site prediction emerge. Significantly, when HyperColumn and CBAM integration was applied, both DenseNet169 and EfficientNet-B0 showed noteworthy accuracy improvements, with increases of approximately 0.69% and 0.70%, respectively. The HyperColumn-CBAM-DenseNet169 model particularly stood out, registering an uplift in the AUC score from 0.8778 to 0.9145. The incorporation of Grad-CAM technology refined the heatmap's focus, achieving alignment with expert-recognized fracture sites and alleviating the deep-learning challenge of heavy reliance on bounding box annotations. This innovative approach signifies potential strides in streamlining training processes and augmenting diagnostic precision in fracture detection.
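As background for the CBAM component above, the following is a minimal PyTorch sketch of CBAM-style channel attention (the spatial-attention half of CBAM is omitted for brevity); the reduction ratio and tensor sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both pooled descriptors, with a bottleneck
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))      # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))       # global max-pooling branch
        scale = torch.sigmoid(avg + mx)[..., None, None]
        return x * scale                        # reweight feature channels

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # attention preserves the input shape
```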
Affiliation(s)
- Joonho Oh: Department of Mechanical Engineering, Chosun University, Gwangju 61452, Republic of Korea
- Sangwon Hwang: Department of Precision Medicine, Yonsei University Wonju College of Medicine, Wonju 26426, Republic of Korea
- Joong Lee: Artificial Intelligence BigData Medical Center, Yonsei University Wonju College of Medicine, Wonju 26426, Republic of Korea
11. Aubert B, Cresson T, de Guise JA, Vazquez C. X-Ray to DRR Images Translation for Efficient Multiple Objects Similarity Measures in Deformable Model 3D/2D Registration. IEEE Trans Med Imaging 2023;42:897-909. PMID: 36318556. DOI: 10.1109/tmi.2022.3218568.
Abstract
The robustness and accuracy of intensity-based 3D/2D registration of a 3D model on planar X-ray image(s) depend on the quality of the image correspondences between the digitally reconstructed radiographs (DRR) generated from the 3D models (the varying image) and the X-ray images (the fixed target). While much effort may be devoted to generating realistic DRR that resemble real X-rays (using complex X-ray simulation, adding density information to 3D models, etc.), significant differences remain between DRR and real X-ray images. Differences such as the presence of adjacent or superimposed soft tissue and bony or foreign structures lead to image-matching difficulties and decrease 3D/2D registration performance. In the proposed method, the X-ray images were converted into DRR images using a GAN-based cross-modality image-to-image translation. With this added prior step of XRAY-to-DRR translation, standard similarity measures become effective even when using simple and fast DRR projection, since for two images to match they must belong to the same image domain and essentially contain the same kind of information. The XRAY-to-DRR translation also addresses the well-known issue of registering an object in a scene composed of multiple objects, by separating superimposed and/or adjacent objects to avoid mismatching across similar structures. We applied the proposed method to the 3D/2D fine registration of vertebra deformable models to biplanar radiographs of the spine. We showed that the XRAY-to-DRR translation enhances registration results by increasing the capture range and decreasing dependence on the choice of similarity measure, since the multi-modal registration becomes mono-modal.
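To illustrate why translating X-rays into the DRR domain lets standard similarity measures work, here is a minimal sketch of normalized cross-correlation (NCC), one such standard measure, computed between a DRR and a translated X-ray; the arrays are synthetic placeholders, and NCC itself is an illustrative choice rather than the paper's stated metric.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation: 1.0 for identical images up to affine intensity."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(0)
drr = rng.random((256, 256))                                  # DRR projected from the 3D model
xray_as_drr = drr + 0.05 * rng.standard_normal((256, 256))    # stand-in for the translated X-ray
print(f"NCC = {ncc(drr, xray_as_drr):.3f}")  # close to 1.0 once both images share the DRR domain
```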