1. De Wilde D, Zanier O, Da Mutten R, Jin M, Regli L, Serra C, Staartjes VE. Strategies for generating synthetic computed tomography-like imaging from radiographs: A scoping review. Med Image Anal 2025; 101:103454. [PMID: 39793215] [DOI: 10.1016/j.media.2025.103454] [Received: 05/26/2024] [Revised: 11/18/2024] [Accepted: 01/03/2025]
Abstract
BACKGROUND Advancements in tomographic medical imaging have revolutionized diagnostics and treatment monitoring by offering detailed 3D visualization of internal structures. Despite the significant value of computed tomography (CT), challenges such as high radiation dosage and cost barriers limit its accessibility, especially in low- and middle-income countries. Recognizing the potential of radiographic imaging in reconstructing CT images, this scoping review aims to explore the emerging field of synthesizing 3D CT-like images from 2D radiographs by examining current methodologies. METHODS A scoping review was carried out following PRISMA-ScR guidelines. Eligible articles were full-text articles published up to September 9, 2024, studying methodologies for the synthesis of 3D CT images from 2D biplanar or four-projection x-ray images, sourced from PubMed MEDLINE, Embase, and arXiv. RESULTS 76 studies were included. Most were published between 2010 and 2020 (38.2 %, n = 29) or from 2020 onwards (36.8 %, n = 28), with European (40.8 %, n = 31), North American (26.3 %, n = 20), and Asian (32.9 %, n = 25) institutions being the primary contributors. Anatomical regions varied, and 17.1 % (n = 13) of studies did not use clinical data. Studies focused on the chest (25 %, n = 19), spine and vertebrae (17.1 %, n = 13), coronary arteries (10.5 %, n = 8), and cranial structures (10.5 %, n = 8), among other anatomical regions. Generative adversarial networks (21.1 %, n = 16), convolutional neural networks (CNNs) (19.7 %, n = 15), and statistical shape models (15.8 %, n = 12) emerged as the most applied methodologies. A limited number of included studies explored the use of conditional diffusion models, iterative reconstruction algorithms, statistical shape models, and digital tomosynthesis. CONCLUSION This scoping review summarizes current strategies and challenges in synthetic imaging generation.
The development of 3D CT-like imaging from 2D radiographs could reduce radiation risk while simultaneously addressing financial and logistical obstacles that impede global access to CT imaging. Despite initial promising results, the field encounters challenges with varied methodologies and frequent lack of proper validation, requiring further research to define synthetic imaging's clinical role.
Affiliation(s)
- Daniel De Wilde
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Olivier Zanier
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Raffaele Da Mutten
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Michael Jin
- Department of Neurosurgery, Stanford University, Stanford, California, USA
- Luca Regli
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Carlo Serra
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Victor E Staartjes
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
2. Fujita A, Goto K, Ueda A, Kuroda Y, Kawai T, Okuzu Y, Okuno Y, Matsuda S. Measurement of the Acetabular Cup Orientation After Total Hip Arthroplasty Based on 3-Dimensional Reconstruction From a Single X-Ray Image Using Generative Adversarial Networks. J Arthroplasty 2025; 40:136-143.e1. [PMID: 38944061] [DOI: 10.1016/j.arth.2024.06.059] [Received: 11/30/2023] [Revised: 06/20/2024] [Accepted: 06/24/2024]
Abstract
BACKGROUND The purpose of this study was to reconstruct 3-dimensional (3D) computed tomography (CT) images from single anteroposterior (AP) postoperative total hip arthroplasty (THA) X-ray images using a deep learning algorithm known as generative adversarial networks (GANs) and to validate the accuracy of cup angle measurement on GAN-generated CT. METHODS We used 2 GAN-based models, CycleGAN and X2CT-GAN, to generate 3D CT images from X-ray images of 386 patients who underwent primary THA using a cementless cup. The training dataset consisted of 522 CT images and 2,282 X-ray images. Image quality was validated using the peak signal-to-noise ratio and the structural similarity index measure. Cup anteversion and inclination measurements on the GAN-generated CT images were compared with the actual CT measurements. Statistical analyses of absolute measurement errors were performed using Mann-Whitney U tests and nonlinear regression analyses. RESULTS The study successfully achieved 3D reconstruction from single AP postoperative THA X-ray images using GANs, with an excellent peak signal-to-noise ratio (37.40) and structural similarity index measure (0.74). The median absolute differences in radiographic anteversion and inclination were 3.45° and 3.25°, respectively. Absolute measurement errors tended to be larger in cases with cup malposition than in those with optimal cup orientation. CONCLUSIONS This study demonstrates the potential of GANs for 3D reconstruction from single AP postoperative THA X-ray images to evaluate cup orientation. Further investigation and refinement of this model are required to improve its performance.
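Peak signal-to-noise ratio and the structural similarity index are the two image-quality metrics reported above. As a rough illustration of what such values measure, here is a minimal numpy sketch; it uses a single global SSIM window, whereas published evaluations typically use a sliding-window SSIM such as scikit-image's.

```python
import numpy as np

def psnr(ref, gen, data_range=1.0):
    """Peak signal-to-noise ratio between a reference and a generated image."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, gen, data_range=1.0):
    """Global (single-window) SSIM; production code uses a sliding Gaussian window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x, y = ref.astype(np.float64), gen.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(psnr(a, a))         # identical images give infinite PSNR
print(ssim_global(a, a))  # identical images give SSIM of 1.0
```

Higher PSNR and SSIM closer to 1.0 indicate better agreement with the reference CT; the images here are synthetic test data, not from the study.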
Affiliation(s)
- Akira Fujita
- Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan; Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Koji Goto
- Department of Orthopaedic Surgery, Kindai University Hospital, Osaka, Japan
- Akihiko Ueda
- Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yutaka Kuroda
- Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Toshiyuki Kawai
- Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yaichiro Okuzu
- Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yasushi Okuno
- Department of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Shuichi Matsuda
- Department of Orthopaedic Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
3. Jecklin S, Shen Y, Gout A, Suter D, Calvet L, Zingg L, Straub J, Cavalcanti NA, Farshad M, Fürnstahl P, Esfandiari H. Domain adaptation strategies for 3D reconstruction of the lumbar spine using real fluoroscopy data. Med Image Anal 2024; 98:103322. [PMID: 39197301] [DOI: 10.1016/j.media.2024.103322] [Received: 01/26/2024] [Revised: 06/13/2024] [Accepted: 08/20/2024]
Abstract
In this study, we address critical barriers hindering the widespread adoption of surgical navigation in orthopedic surgeries, including time constraints, cost implications, radiation concerns, and integration within the surgical workflow. Our recent work, X23D, introduced an approach for generating 3D anatomical models of the spine from only a few intraoperative fluoroscopic images. This approach negates the need for conventional registration-based surgical navigation by creating a direct intraoperative 3D reconstruction of the anatomy. Despite these strides, the practical application of X23D has been limited by a significant domain gap between synthetic training data and real intraoperative images. In response, we devised a novel data collection protocol to assemble a paired dataset of synthetic and real fluoroscopic images captured from identical perspectives. Leveraging this unique dataset, we refined our deep learning model through transfer learning, effectively bridging the domain gap between synthetic and real X-ray data. We introduce an innovative approach combining style transfer with the curated paired dataset. This method transforms real X-ray images into the synthetic domain, enabling the in-silico-trained X23D model to achieve high accuracy in real-world settings. Our results demonstrate that the refined model can rapidly generate accurate 3D reconstructions of the entire lumbar spine from as few as three intraoperative fluoroscopic shots. The enhanced model reached sufficient accuracy, achieving an 84% F1 score and matching the benchmark previously set using synthetic data alone. Moreover, with a computational time of just 81.1 ms, our approach offers the real-time capability vital for integration into active surgical procedures. By investigating optimal imaging setups and view-angle dependencies, we have further validated the practicality and reliability of our system in a clinical environment.
Our research represents a promising advancement in intraoperative 3D reconstruction. This innovation has the potential to enhance intraoperative surgical planning, navigation, and surgical robotics.
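The F1 score reported for X23D compares predicted against ground-truth anatomy. For binary occupancy volumes, F1 reduces to the Dice-style overlap sketched below; this is an illustrative implementation, not the authors' code, and the cube volumes are made-up test data.

```python
import numpy as np

def f1_score_volumes(pred, target):
    """F1 (Dice-like) overlap between two binary occupancy volumes."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # voxels correctly predicted occupied
    fp = np.logical_and(pred, ~target).sum()  # predicted occupied but empty
    fn = np.logical_and(~pred, target).sum()  # missed occupied voxels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gt = np.zeros((32, 32, 32), dtype=bool)
gt[8:24, 8:24, 8:24] = True      # ground-truth cube
pred = np.zeros_like(gt)
pred[10:26, 8:24, 8:24] = True   # prediction shifted by two voxels along one axis
print(f1_score_volumes(pred, gt))  # 0.875: the cubes overlap in 14 of 16 slices
```

An F1 of 1.0 means perfect voxel-wise agreement; small rigid misalignments already cost several points, which is why scores in the mid-80s can still correspond to clinically useful reconstructions.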
Affiliation(s)
- Sascha Jecklin
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Youyang Shen
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Amandine Gout
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Daniel Suter
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Lilian Calvet
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Lukas Zingg
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Jennifer Straub
- Universitätsklinik für Orthopädie, AKH Wien, Währinger Gürtel 18-20, 1090 Wien, Austria
- Nicola Alessandro Cavalcanti
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Mazda Farshad
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
4. Burton W, Myers C, Stefanovic M, Shelburne K, Rullkoetter P. Scan-Free and Fully Automatic Tracking of Native Knee Anatomy from Dynamic Stereo-Radiography with Statistical Shape and Intensity Models. Ann Biomed Eng 2024; 52:1591-1603. [PMID: 38558356] [DOI: 10.1007/s10439-024-03473-5] [Received: 12/06/2023] [Accepted: 02/09/2024]
Abstract
Kinematic tracking of native anatomy from stereo-radiography provides a quantitative basis for evaluating human movement. Conventional tracking procedures require significant manual effort and call for acquisition and annotation of subject-specific volumetric medical images. The current work introduces a framework for fully automatic tracking of native knee anatomy from dynamic stereo-radiography which forgoes reliance on volumetric scans. The method consists of three computational steps. First, captured radiographs are annotated with segmentation maps and anatomic landmarks using a convolutional neural network. Next, a non-convex polynomial optimization problem formulated from annotated landmarks is solved to acquire preliminary anatomy and pose estimates. Finally, a global optimization routine is performed for concurrent refinement of anatomy and pose. An objective function is maximized that quantifies similarity between masked radiographs and digitally reconstructed radiographs produced from statistical shape and intensity models. The proposed framework was evaluated against manually tracked trials comprising dynamic activities, and additional frames capturing a static knee phantom. Experiments revealed anatomic surface errors routinely below 1.0 mm in both evaluation cohorts. Median absolute errors of individual bone pose estimates were below 1.0° or 1.0 mm for 15 out of 18 degrees of freedom in both evaluation cohorts. Results indicate that accurate pose estimation of native anatomy from stereo-radiography may be performed with significantly reduced manual effort, and without reliance on volumetric scans.
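The refinement step above maximizes an intensity-based similarity between masked radiographs and DRRs. A common choice for such objectives is normalized cross-correlation, sketched below with synthetic images; this is illustrative only and not necessarily the exact objective used in the study.

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    """Normalized cross-correlation between two images, in [-1, 1]."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a = a - a.mean()                       # remove brightness offset
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
    return float(np.dot(a, b) / denom)

rng = np.random.default_rng(1)
drr = rng.random((128, 128))               # stand-in for a rendered DRR
radiograph = 0.8 * drr + 0.1               # linearly related "real" image
print(ncc(drr, radiograph))                # close to 1: NCC ignores gain and offset
```

Invariance to linear intensity changes is what makes NCC-style measures attractive for matching rendered DRRs against real radiographs, whose exposure settings differ from the simulation.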
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Margareta Stefanovic
- Department of Electrical and Computer Engineering, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Kevin Shelburne
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80208, USA
5. Li B, Zhang J, Wang Q, Li H, Wang Q. Three-dimensional spine reconstruction from biplane radiographs using convolutional neural networks. Med Eng Phys 2024; 123:104088. [PMID: 38365341] [DOI: 10.1016/j.medengphy.2023.104088] [Received: 01/08/2023] [Revised: 12/04/2023] [Accepted: 12/10/2023]
Abstract
PURPOSE The purpose of this study was to develop and evaluate a deep learning network for three-dimensional reconstruction of the spine from biplanar radiographs. METHODS The proposed approach focused on extracting similar features and multiscale features of bone tissue in biplanar radiographs. Bone tissue features were reconstructed for feature representation across dimensions to generate three-dimensional volumes. The number of feature mappings was gradually reduced during reconstruction to transform the high-dimensional features into the three-dimensional image domain. We produced and released eight public datasets to train and test the proposed network. Two evaluation metrics were proposed and combined with four classical evaluation metrics to measure the performance of the method. RESULTS In comparative experiments, the reconstruction results of this method achieved a Hausdorff distance of 1.85 mm, a surface overlap of 0.2 mm, a volume overlap of 0.9664, and an offset distance of only 0.21 mm from the vertebral body centroid. The results of this study indicate that the proposed method is reliable.
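The Hausdorff distance reported above measures worst-case disagreement between two surfaces. A brute-force point-set version can be sketched as follows (illustrative only; mesh-based evaluations first sample points from the reconstructed and reference surfaces):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise distance matrix via broadcasting (fine for small point sets).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Largest nearest-neighbor distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

square = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
shifted = square + np.array([0.1, 0., 0.])
print(hausdorff(square, shifted))  # 0.1: a rigid 0.1-unit shift along x
```

Because it is a maximum rather than an average, the Hausdorff distance is sensitive to single outlier vertices, which is why it is usually reported alongside mean surface-distance and volume-overlap metrics as in this study.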
Affiliation(s)
- Bo Li
- Department of Electronic Engineering, Yunnan University, Kunming, China
- Junhua Zhang
- Department of Electronic Engineering, Yunnan University, Kunming, China
- Qian Wang
- Department of Electronic Engineering, Yunnan University, Kunming, China
- Hongjian Li
- The First People's Hospital of Yunnan Province, China
- Qiyang Wang
- The First People's Hospital of Yunnan Province, China
6. Sunilkumar AP, Keshari Parida B, You W. Recent Advances in Dental Panoramic X-Ray Synthesis and Its Clinical Applications. IEEE Access 2024; 12:141032-141051. [DOI: 10.1109/access.2024.3422650]
Affiliation(s)
- Anusree P. Sunilkumar
- Department of Information and Communication Engineering, Artificial Intelligence and Image Processing Laboratory (AIIP Laboratory), Sun Moon University, Asan-si, Republic of Korea
- Bikram Keshari Parida
- Department of Information and Communication Engineering, Artificial Intelligence and Image Processing Laboratory (AIIP Laboratory), Sun Moon University, Asan-si, Republic of Korea
- Wonsang You
- Department of Information and Communication Engineering, Artificial Intelligence and Image Processing Laboratory (AIIP Laboratory), Sun Moon University, Asan-si, Republic of Korea
7. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH. A Systematic Literature Review of 3D Deep Learning Techniques in Computed Tomography Reconstruction. Tomography 2023; 9:2158-2189. [PMID: 38133073] [PMCID: PMC10748093] [DOI: 10.3390/tomography9060169] [Received: 10/13/2023] [Revised: 11/27/2023] [Accepted: 12/01/2023]
Abstract
Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and subject to artifacts and noise, which compromise image quality and accuracy. Deep learning has the potential to address these challenges and improve CT image reconstruction. Our aim is therefore to determine which 3D deep learning techniques are used in CT reconstruction and to identify accessible training and validation datasets. The literature search covered five databases. After a careful assessment of each record against the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
Affiliation(s)
- Hameedur Rahman
- Department of Computer Games Development, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Abdur Rehman Khan
- Department of Creative Technologies, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Touseef Sadiq
- Centre for Artificial Intelligence Research, Department of Information and Communication Technology, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
- Ashfaq Hussain Farooqi
- Department of Computer Science, Faculty of Computing AI, Air University, Islamabad 44000, Pakistan
- Inam Ullah Khan
- Department of Electronic Engineering, School of Engineering & Applied Sciences (SEAS), Isra University, Islamabad Campus, Islamabad 44000, Pakistan
- Wei Hong Lim
- Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
8. Vitković N, Stojković JR, Korunović N, Teuţan E, Pleşa A, Ianoşi-Andreeva-Dimitrova A, Górski F, Păcurar R. Extra-Articular Distal Humerus Plate 3D Model Creation by Using the Method of Anatomical Features. Materials (Basel) 2023; 16:5409. [PMID: 37570113] [PMCID: PMC10420112] [DOI: 10.3390/ma16155409] [Received: 07/13/2023] [Revised: 07/26/2023] [Accepted: 07/30/2023]
Abstract
Proper fixation techniques are crucial in orthopedic surgery for the treatment of various medical conditions. Fractures of the distal humerus can occur due to either high-energy trauma with skin rupture or low-energy trauma in osteoporotic bone. The recommended surgical approach for treating these extra-articular distal humerus fractures involves performing an open reduction and internal fixation procedure using plate implants. This surgical intervention plays a crucial role in enhancing patient recovery and minimizing soft tissue complications. Dynamic Compression Plates (DCPs) and Locking Compression Plates (LCPs) are commonly used for bone fixation, with LCP extra-articular distal humerus plates being the preferred choice for extra-articular fractures. These fixation systems have anatomically shaped designs that provide angular stability to the bone. However, depending on the shape and position of the bone fracture, additional plate bending may be required during surgery. This can pose challenges such as increased surgery time and the risk of incorrect plate shaping. To enhance the accuracy of plate placement, the study introduces the Method of Anatomical Features (MAF) in conjunction with the Characteristic Product Features methodology (CPF). The utilization of the MAF enables the development of a parametric model for the contact surface between the plate and the humerus. This model is created using specialized Referential Geometrical Entities (RGEs), Constitutive Geometrical Entities (CGEs), and Regions of Interest (ROI) that are specific to the human humerus bone. By utilizing this anatomically tailored contact surface model, the standard plate model can be customized (bent) to precisely conform to the distinct shape of the patient's humerus bone during the pre-operative planning phase. Alternatively, the newly designed model can be fabricated using a specific manufacturing technology. 
This approach aims to improve the geometrical accuracy of plate fixation, thus optimizing surgical outcomes and patient recovery.
Affiliation(s)
- Nikola Vitković
- Faculty of Mechanical Engineering, University of Nis, Aleksandra Medvedeva, 18000 Nis, Serbia
- Jelena R. Stojković
- Faculty of Mechanical Engineering, University of Nis, Aleksandra Medvedeva, 18000 Nis, Serbia
- Nikola Korunović
- Faculty of Mechanical Engineering, University of Nis, Aleksandra Medvedeva, 18000 Nis, Serbia
- Emil Teuţan
- Department of Mechatronics and Machine Dynamics, Faculty of Automotive, Mechatronics and Mechanical Engineering, Technical University of Cluj-Napoca, Blv. Muncii, No. 103-105, 400641 Cluj-Napoca, Romania
- Alin Pleşa
- Department of Mechatronics and Machine Dynamics, Faculty of Automotive, Mechatronics and Mechanical Engineering, Technical University of Cluj-Napoca, Blv. Muncii, No. 103-105, 400641 Cluj-Napoca, Romania
- Alexandru Ianoşi-Andreeva-Dimitrova
- Department of Mechatronics and Machine Dynamics, Faculty of Automotive, Mechatronics and Mechanical Engineering, Technical University of Cluj-Napoca, Blv. Muncii, No. 103-105, 400641 Cluj-Napoca, Romania
- Filip Górski
- Faculty of Mechanical Engineering, Poznan University of Technology, Piotrowo 3 STR, 61-138 Poznan, Poland
- Răzvan Păcurar
- Department of Manufacturing Engineering, Faculty of Industrial Engineering, Robotics and Production Management, Technical University of Cluj-Napoca, Blv. Muncii, No. 103-105, 400641 Cluj-Napoca, Romania
9. Sarmah M, Neelima A, Singh HR. Survey of methods and principles in three-dimensional reconstruction from two-dimensional medical images. Vis Comput Ind Biomed Art 2023; 6:15. [PMID: 37495817] [PMCID: PMC10371974] [DOI: 10.1186/s42492-023-00142-7] [Received: 02/28/2023] [Accepted: 06/27/2023]
Abstract
Three-dimensional (3D) reconstruction of human organs has gained attention in recent years due to advances in the Internet and graphics processing units. In the coming years, most patient care will shift toward this new paradigm. However, development of fast and accurate 3D models from medical images or a set of medical scans remains a daunting task due to the number of pre-processing steps involved, most of which are dependent on human expertise. In this review, a survey of pre-processing steps was conducted, and reconstruction techniques for several organs in medical diagnosis were studied. Various methods and principles related to 3D reconstruction were highlighted. The usefulness of 3D reconstruction of organs in medical diagnosis was also highlighted.
Affiliation(s)
- Mriganka Sarmah
- Department of Computer Science and Engineering, National Institute of Technology, Nagaland, 797103, India
- Arambam Neelima
- Department of Computer Science and Engineering, National Institute of Technology, Nagaland, 797103, India
- Heisnam Rohen Singh
- Department of Information Technology, Nagaland University, Nagaland, 797112, India
10. Sun W, Zhao Y, Liu J, Zheng G. LatentPCN: latent space-constrained point cloud network for reconstruction of 3D patient-specific bone surface models from calibrated biplanar X-ray images. Int J Comput Assist Radiol Surg 2023. [PMID: 37027083] [DOI: 10.1007/s11548-023-02877-3] [Received: 02/06/2023] [Accepted: 03/15/2023]
Abstract
PURPOSE Accurate three-dimensional (3D) models play crucial roles in computer-assisted planning and interventions. MR or CT images are frequently used to derive 3D models but have the disadvantages of being expensive or involving ionizing radiation (e.g., CT acquisition). An alternative method based on calibrated 2D biplanar X-ray images is highly desired. METHODS A point cloud network, referred to as LatentPCN, is developed for reconstruction of 3D surface models from calibrated biplanar X-ray images. LatentPCN consists of three components: an encoder, a predictor, and a decoder. During training, a latent space is learned to represent shape features. After training, LatentPCN maps sparse silhouettes generated from 2D images to a latent representation, which is taken as the input to the decoder to derive a 3D bone surface model. Additionally, LatentPCN allows for estimation of a patient-specific reconstruction uncertainty. RESULTS We designed and conducted comprehensive experiments on datasets of 25 simulated cases and 10 cadaveric cases to evaluate the performance of LatentPCN. On these two datasets, the mean reconstruction errors achieved by LatentPCN were 0.83 mm and 0.92 mm, respectively. A correlation between large reconstruction errors and high uncertainty in the reconstruction results was observed. CONCLUSION LatentPCN can reconstruct patient-specific 3D surface models from calibrated 2D biplanar X-ray images with high accuracy and uncertainty estimation. The sub-millimeter reconstruction accuracy on cadaveric cases demonstrates its potential for surgical navigation applications.
Affiliation(s)
- Wenyuan Sun
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Yuyun Zhao
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Jihao Liu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
- Guoyan Zheng
- Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road, Shanghai, 200240, China
11. Aubert B, Cresson T, de Guise JA, Vazquez C. X-Ray to DRR Images Translation for Efficient Multiple Objects Similarity Measures in Deformable Model 3D/2D Registration. IEEE Trans Med Imaging 2023; 42:897-909. [PMID: 36318556] [DOI: 10.1109/tmi.2022.3218568]
Abstract
The robustness and accuracy of intensity-based 3D/2D registration of a 3D model on planar X-ray image(s) depend on the quality of the image correspondences between the digitally reconstructed radiographs (DRR) generated from the 3D models (varying image) and the X-ray images (fixed target). While much effort may be devoted to generating realistic DRR that are similar to real X-rays (using complex X-ray simulation, adding density information to 3D models, etc.), significant differences remain between DRR and real X-ray images. Differences such as the presence of adjacent or superimposed soft tissue and bony or foreign structures lead to image-matching difficulties and decrease 3D/2D registration performance. In the proposed method, the X-ray images were converted into DRR images using a GAN-based cross-modality image-to-image translation. With this added prior step of XRAY-to-DRR translation, standard similarity measures become efficient even when using simple and fast DRR projection. For two images to match, they must belong to the same image domain and essentially contain the same kind of information. The XRAY-to-DRR translation also addresses the well-known issue of registering an object in a scene composed of multiple objects, by separating superimposed and/or adjacent objects to avoid mismatching across similar structures. We applied the proposed method to the 3D/2D fine registration of vertebra deformable models to biplanar radiographs of the spine. We showed that the XRAY-to-DRR translation enhances the registration results by increasing the capture range and decreasing dependence on the choice of similarity measure, since the multi-modal registration becomes mono-modal.
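A DRR is a simulated radiograph obtained by integrating attenuation along rays cast through a 3D volume. The toy sketch below uses parallel rays along one axis with an exponential attenuation model; the paper's DRRs use calibrated perspective projection, which this deliberately simplifies.

```python
import numpy as np

def drr_parallel(volume, axis=0):
    """Parallel-beam DRR: line integrals of attenuation along one axis,
    followed by the exponential attenuation model I = exp(-integral)."""
    line_integrals = volume.sum(axis=axis)  # one ray per detector pixel
    return np.exp(-line_integrals)

vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 0.1     # a dense cube inside an otherwise empty volume
drr = drr_parallel(vol, axis=0)
print(drr.shape)                    # (32, 32): a 2D projection of the 3D volume
print(drr[16, 16] < drr[0, 0])      # rays through the cube are attenuated: True
```

Each pixel of the result is the transmitted intensity after the ray traverses the volume, which is why dense structures appear dark; fast but simple projectors like this become usable once the X-ray image has been translated into the DRR domain.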
12
Gao C, Killeen BD, Hu Y, Grupp RB, Taylor RH, Armand M, Unberath M. Synthetic data accelerates the development of generalizable learning-based algorithms for X-ray image analysis. NAT MACH INTELL 2023; 5:294-308. [PMID: 38523605 PMCID: PMC10959504 DOI: 10.1038/s42256-023-00629-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 02/06/2023] [Indexed: 03/26/2024]
Abstract
Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
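The domain generalization side of this sim-to-real recipe often amounts to aggressive appearance randomization of the synthetic images before training. A minimal sketch of that idea (not the SyntheX pipeline itself, whose augmentations are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

def domain_randomize(image, rng):
    """Randomize the appearance of a synthetic radiograph (random gamma,
    gain, and noise) so a model trained on it generalizes to real data."""
    img = image.astype(np.float64)
    img = img ** rng.uniform(0.5, 2.0)            # random gamma
    img = img * rng.uniform(0.7, 1.3)             # random gain
    img = img + rng.normal(0.0, 0.02, img.shape)  # sensor-like noise
    return np.clip(img, 0.0, 1.0)

# Each training sample gets an independently randomized appearance.
batch = [domain_randomize(np.full((8, 8), 0.5), rng) for _ in range(4)]
print(len(batch), batch[0].shape)  # 4 (8, 8)
```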
Affiliation(s)
- Cong Gao, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Benjamin D. Killeen, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Yicheng Hu, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Robert B. Grupp, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Russell H. Taylor, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA; Department of Orthopaedic Surgery, Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA
- Mathias Unberath, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
13
Cardoen T, Leroux S, Simoens P. Iterative Online 3D Reconstruction from RGB Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:9782. [PMID: 36560150 PMCID: PMC9784066 DOI: 10.3390/s22249782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/25/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
3D reconstruction is the computer vision task of reconstructing the 3D shape of an object from multiple 2D images. Most existing algorithms for this task are designed for offline settings, producing a single reconstruction from a batch of images taken from diverse viewpoints. Alongside reconstruction accuracy, additional considerations arise when 3D reconstructions are used in real-time processing pipelines for applications such as robot navigation or manipulation. In these cases, an accurate 3D reconstruction is already required while the data gathering is still in progress. In this paper, we demonstrate how existing batch-based reconstruction algorithms lead to suboptimal reconstruction quality when used for online, iterative 3D reconstruction and propose appropriate modifications to the existing Pix2Vox++ architecture. When additional viewpoints become available at a high rate, e.g., from a camera mounted on a drone, selecting the most informative viewpoints is important in order to mitigate long term memory loss and to reduce the computational footprint. We present qualitative and quantitative results on the optimal selection of viewpoints and show that state-of-the-art reconstruction quality is already obtained with elementary selection algorithms.
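Informative-viewpoint selection of the kind discussed above can be approximated, at its most elementary, by greedy farthest-point selection over candidate view angles. This is a hypothetical sketch under the assumption that angular spread is the informativeness criterion; the elementary selection algorithms evaluated in the paper may differ in detail.

```python
import numpy as np

def select_views(angles, k):
    """Greedy farthest-point selection of k view indices so that the
    chosen view angles (radians) cover the angular range well."""
    chosen = [0]  # seed with the first available view
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i in range(len(angles)):
            if i in chosen:
                continue
            # Distance from candidate i to its nearest already-chosen view.
            d = min(abs(angles[i] - angles[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return sorted(chosen)

angles = np.linspace(0, np.pi, 9)  # 9 candidate camera angles
print(select_views(angles, 3))  # [0, 4, 8]
```

The selected indices 0, 4, and 8 correspond to views at 0, π/2, and π, i.e. the most widely spread triplet.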
14
Jecklin S, Jancik C, Farshad M, Fürnstahl P, Esfandiari H. X23D-Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data. J Imaging 2022; 8:271. [PMID: 36286365 PMCID: PMC9604813 DOI: 10.3390/jimaging8100271] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Revised: 09/07/2022] [Accepted: 09/27/2022] [Indexed: 11/16/2022] Open
Abstract
Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, based on planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters in the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving a higher accuracy compared to the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images that were digitally reconstructed from the public CTSpine1K dataset. As evaluated by unseen data, we achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a counterpart method in the state of the art by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making solely based on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
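The F1 score reported above is a standard overlap metric between predicted and ground-truth shapes. A generic voxel-mask version (not the paper's exact surface-score protocol, which operates on surface points) can be sketched as:

```python
import numpy as np

def voxel_f1(pred, gt):
    """F1 overlap (equivalent to the Dice coefficient) between two
    binary voxel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gt = np.zeros((8, 8, 8), bool)
gt[2:6, 2:6, 2:6] = True               # ground-truth shape
pred = np.zeros_like(gt)
pred[2:6, 2:6, 2:7] = True             # slightly over-segmented prediction
print(round(voxel_f1(pred, gt), 3))    # 0.889
```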
Affiliation(s)
- Sascha Jecklin, Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Carla Jancik, Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Mazda Farshad, Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl, Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari, Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
15
Ji C, Li J, Praster M, Rath B, Hildebrand F, Eschweiler J. Smoothing the Undersampled Carpal Bone Model with Small Volume and Large Curvature: A Feasibility Study. Life (Basel) 2022; 12:life12050770. [PMID: 35629436 PMCID: PMC9145375 DOI: 10.3390/life12050770] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 05/10/2022] [Accepted: 05/18/2022] [Indexed: 12/27/2022] Open
Abstract
The carpal bones are eight small bones with irregularities and high curvature on their surfaces. The 3D model of the carpal bone serves as the foundation of further clinical applications, e.g., wrist kinematic behavior. However, due to the limitation of the Magnetic Resonance Imaging (MRI) technique, reconstructed carpal bone models are discretely undersampled, which produces dramatic stair-step effects and leads to abnormal meshes on edges and surfaces. Our study focuses on determining the viability of various smoothing techniques for a carpal model reconstructed from in vivo gathered MR images. Five algorithms, namely the Laplacian smoothing algorithm, the Laplacian smoothing algorithm with pre-dilation, the scale-dependent Laplacian algorithm, the curvature flow algorithm, and the inverse distance algorithm, were chosen for evaluation. The assessment took into account the Relative Volume Difference and the Hausdorff Distance as well as the surface quality and the preservation of morphological and morphometric properties. For the five algorithms, we analyzed the Relative Volume Difference and the Hausdorff Distance for all eight carpal bones. Among all the algorithms, the scale-dependent Laplacian method produced the best results regarding surface quality and the preservation of morphological and morphometric properties. Based on our extensive examinations, the scale-dependent Laplacian algorithm is suitable for the undersampled carpal bone model with small volume and large curvature.
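The baseline in this comparison, classic Laplacian smoothing, moves each vertex a fraction of the way toward the centroid of its neighbors on every pass. A minimal sketch on a toy polyline "mesh" (the scale-dependent variant favored by the study additionally weights the update by local edge lengths):

```python
import numpy as np

def laplacian_smooth(verts, neighbors, lam=0.5, iters=10):
    """Classic Laplacian smoothing: move each vertex a fraction lam
    toward the centroid of its neighbors, repeated for iters passes."""
    v = verts.astype(np.float64).copy()
    for _ in range(iters):
        # Centroids are computed from the previous pass before updating.
        centroids = np.array([v[n].mean(axis=0) for n in neighbors])
        v += lam * (centroids - v)
    return v

# Toy "mesh": a noisy 2D polyline with chain adjacency.
verts = np.array([[0, 0], [1, 1], [2, -1], [3, 1], [4, 0]], float)
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
smoothed = laplacian_smooth(verts, nbrs)
print(smoothed.round(2))
```

The zig-zag in the y-coordinates flattens out after a few passes, illustrating both the smoothing effect and the volume shrinkage that motivates the more sophisticated variants compared in the study.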
Affiliation(s)
- Chengcheng Ji, Department of Orthopaedics, Trauma and Reconstructive Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany
- Jianzhang Li, Department of Orthopaedics, Trauma and Reconstructive Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany (Correspondence: Tel.: +49-(0)-241-808-8386)
- Maximilian Praster, Department of Orthopaedics, Trauma and Reconstructive Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany
- Björn Rath, Department of Orthopaedic Surgery, Klinikum Wels-Grieskirchen, 4600 Wels, Austria
- Frank Hildebrand, Department of Orthopaedics, Trauma and Reconstructive Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany
- Jörg Eschweiler, Department of Orthopaedics, Trauma and Reconstructive Surgery, RWTH Aachen University Hospital, 52074 Aachen, Germany
16
Lu S, Li S, Wang Y, Zhang L, Hu Y, Li B. Prior information-based high-resolution tomography image reconstruction from a single digitally reconstructed radiograph. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac508d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 01/31/2022] [Indexed: 11/12/2022]
Abstract
Tomography images are essential for clinical diagnosis and trauma surgery, allowing doctors to understand the internal information of patients in more detail. Since the large amount of x-ray radiation from the continuous imaging during the process of computed tomography scanning can cause serious harm to the human body, reconstructing tomographic images from sparse views becomes a potential solution to this problem. Here we present a deep-learning framework for tomography image reconstruction, namely TIReconNet, which defines image reconstruction as a data-driven supervised learning task that allows a mapping between the 2D projection view and the 3D volume to emerge from a corpus. The proposed framework consists of four parts: a feature extraction module, a shape mapping module, a volume generation module and a super resolution module. The proposed framework combines 2D and 3D operations, which can generate high-resolution tomographic images with a relatively small amount of computing resources and maintain spatial information. The proposed method is verified on chest digitally reconstructed radiographs, and the reconstructed tomography images achieved a PSNR value of 18.621 ± 1.228 dB and an SSIM value of 0.872 ± 0.041 when compared against the ground truth. In conclusion, an innovative convolutional neural network architecture is proposed and validated in this study, demonstrating the potential to generate a 3D high-resolution tomographic image from a single 2D image using deep learning. This method may actively promote the application of reconstruction technology for radiation reduction and support further exploration of intraoperative guidance in trauma and orthopedic surgery.
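The PSNR figure quoted above is a straightforward function of mean squared error; a minimal implementation for intensity images in the range [0, 1]:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)     # uniform error of 0.1 -> MSE of 0.01
print(round(psnr(a, b), 2))  # 20.0
```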
17
Akinbo RS, Daramola OA. Ensemble Machine Learning Algorithms for Prediction and Classification of Medical Images. ARTIF INTELL 2021. [DOI: 10.5772/intechopen.100602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The employment of machine learning algorithms in disease classification has evolved as a precision medicine for scientific innovation. The geometric growth in various machine learning systems has paved the way for more research in the medical imaging process. This research aims to promote the development of machine learning algorithms for the classification of medical images. Automated classification of medical images is a fascinating application of machine learning and offers the possibility of higher predictability and accuracy. The technological advancement in the processing of medical imaging will help to reduce the complexities of diseases, and some existing constraints will be greatly minimized. This research exposes the main ensemble learning techniques as it covers the theoretical background of machine learning, applications, comparison of machine learning and deep learning, and ensemble learning with reviews of state-of-the-art literature, framework, and analysis. The work extends to medical image types, applications, benefits, and operations. We proposed the application of the ensemble machine learning approach in the classification of medical images for better performance and accuracy. The integration of advanced technology in clinical imaging will help in the prompt classification, prediction, early detection, and better interpretation of medical images; this will, in turn, improve quality of life and expand the clinical scope of machine learning applications.
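The simplest ensemble technique covered by such reviews is hard majority voting across base classifiers; a minimal sketch with hypothetical model outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each base classifier casts one label per
    sample; the most common label wins (ties broken by first seen)."""
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*predictions)]

# Three hypothetical base models classifying four images.
model_a = ["tumor", "normal", "normal", "tumor"]
model_b = ["tumor", "tumor", "normal", "normal"]
model_c = ["normal", "tumor", "normal", "tumor"]
print(majority_vote([model_a, model_b, model_c]))
# ['tumor', 'tumor', 'normal', 'tumor']
```

Voting tends to outperform any single base model when the base models are reasonably accurate and make uncorrelated errors, which is the core premise of ensemble methods.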