1. Abbott RE, Nishimwe A, Wiputra H, Breighner RE, Ellingson AM. OrthoFusion: A Super-Resolution Algorithm to Fuse Orthogonal CT Volumes. Res Sq 2024:rs.3.rs-4117386. [PMID: 38645068; PMCID: PMC11030529; DOI: 10.21203/rs.3.rs-4117386/v1]
Abstract
OrthoFusion, an intuitive super-resolution algorithm, is presented in this study to enhance the spatial resolution of clinical CT volumes. The efficacy of OrthoFusion is evaluated, relative to high-resolution CT volumes (ground truth), by assessing image volume and derived bone morphological similarity, as well as its performance in specific applications in 2D-3D registration tasks. Results demonstrate that OrthoFusion significantly reduced segmentation time, while improving structural similarity of bone images and relative accuracy of derived bone model geometries. Moreover, it proved beneficial in the context of biplane videoradiography, enhancing the similarity of digitally reconstructed radiographs to radiographic images and improving the accuracy of relative bony kinematics. OrthoFusion's simplicity, ease of implementation, and generalizability make it a valuable tool for researchers and clinicians seeking high spatial resolution from existing clinical CT data. This study opens new avenues for retrospectively utilizing clinical images for research and advanced clinical purposes, while reducing the need for additional scans, mitigating associated costs and radiation exposure.
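The abstract does not spell out OrthoFusion's fusion rule, but the core idea of combining two orthogonally acquired CT volumes can be sketched as follows: resample each anisotropic volume to a common isotropic grid, then combine. The function name, parameters, and the simple averaging step are illustrative assumptions, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import zoom

def fuse_orthogonal(vol_axial, vol_sagittal, axial_spacing, sagittal_spacing, iso=0.5):
    """Resample two orthogonal CT volumes to a shared isotropic voxel size
    and fuse them. Averaging is a placeholder fusion rule; the actual
    OrthoFusion combination step is not described in the abstract."""
    # Upsample each volume so every axis has spacing `iso` (mm).
    up_a = zoom(vol_axial, [s / iso for s in axial_spacing], order=3)
    up_s = zoom(vol_sagittal, [s / iso for s in sagittal_spacing], order=3)
    # Crop both to the common overlapping extent before combining.
    shape = np.minimum(up_a.shape, up_s.shape)
    up_a = up_a[: shape[0], : shape[1], : shape[2]]
    up_s = up_s[: shape[0], : shape[1], : shape[2]]
    return (up_a + up_s) / 2.0
```

In practice the two volumes would first be rigidly co-registered; the sketch assumes they already share a coordinate frame.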
2. Dabus G, Kotecha R, Linfante I, Wieczorek DJ, Gutierrez AN, Candela JG, McDermott MW. Analysis of potential time saving in brain arteriovenous malformation stereotactic radiosurgery planning using a new software platform. Med Dosim 2021; 47:38-42. [PMID: 34481717; DOI: 10.1016/j.meddos.2021.07.004]
Abstract
To evaluate the utility of integrating a 3D vessel tree co-registration software platform into the stereotactic radiosurgery (SRS) workflow, and its time savings for brain arteriovenous malformation (bAVM) treatment in adults, compared with the conventional stereotactic head frame workflow. Eight consecutive adult bAVM cases were selected and retrospectively reviewed; the total number of angiograms and SRS procedures was 8. The electronic medical records were analyzed by time stamps to determine the length of time for each component of the set-up, transport, and frame removal. Times were averaged, and the start of sedation by anesthesia was used as a surrogate for the start of the frame application process. Reductions in workflow times were then modeled assuming cerebral angiography as a separate procedure. Of the 8 adult bAVM cases included, 6 patients were female. All patients had a single treatment session. Average age was 51.5 years (range: 36-71). All patients were treated under monitored anesthesia care. In 6 patients, the AVM was deeply located (basal ganglia, midbrain, brainstem); in 2 cases, the lesion was frontal. The Spetzler-Martin grade was 2 in 4 cases (50%) and 3 in the remaining 4 (50%). The average prescription isodose volume (PIV) and 12 Gy volume (V12Gy) were 0.85 cc and 1.74 cc, respectively. The mean time from frame application to arrival in the angiography room was 111.5 minutes (range 40 to 171 min; median 107 min; SD 35.3 min); transport from the angiography room to SRS took 47.5 minutes (range 15 to 107 min; median 36 min; SD 31.1 min); and frame removal after SRS took 20.5 minutes (range 10 to 47 min; median 16 min; SD 11.6 min). The average total additional time for the entire process of frame application, patient transportation, and frame removal was 132 minutes (range 87 to 181 min; median 127.5 min; SD 28.4 min).
Therefore, assuming a non-frame-based workflow with angiography performed ahead of the actual radiosurgical treatment, the total time savings on the day of treatment was estimated at 132 minutes (range 87 to 181 min; median 127.5 min; SD 28.4 min). The ability to perform angiography, image fusion, and treatment planning on the actual day of delivery using 3-dimensional vessel tree co-registration could yield significant time savings over traditional workflow practices. Further experience with this system will evaluate its accuracy, reproducibility, and potential broader use in SRS workflow paradigms for the treatment of vascular pathologies. For bAVMs, this time savings could allow streamlined workflows on the day of SRS.
Affiliation(s)
- Guilherme Dabus: Miami Neuroscience Institute, Baptist Health South Florida, Miami, FL; Miami Cardiac & Vascular Institute, Baptist Health South Florida, Miami, FL; Herbert Wertheim College of Medicine, Florida International University, Miami, FL
- Rupesh Kotecha: Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, FL; Herbert Wertheim College of Medicine, Florida International University, Miami, FL
- Italo Linfante: Miami Neuroscience Institute, Baptist Health South Florida, Miami, FL; Miami Cardiac & Vascular Institute, Baptist Health South Florida, Miami, FL; Herbert Wertheim College of Medicine, Florida International University, Miami, FL
- D Jay Wieczorek: Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, FL; Herbert Wertheim College of Medicine, Florida International University, Miami, FL
- Alonso N Gutierrez: Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, FL; Herbert Wertheim College of Medicine, Florida International University, Miami, FL
- John G Candela: Miami Neuroscience Institute, Baptist Health South Florida, Miami, FL
- Michael W McDermott: Miami Neuroscience Institute, Baptist Health South Florida, Miami, FL; Herbert Wertheim College of Medicine, Florida International University, Miami, FL
3. Mozaffarilegha M, Yaghobi Joybari A, Mostaar A. Medical Image Fusion Using Bi-dimensional Empirical Mode Decomposition (BEMD) and an Efficient Fusion Scheme. J Biomed Phys Eng 2020; 10:727-736. [PMID: 33364210; PMCID: PMC7753264; DOI: 10.31661/jbpe.v0i0.830]
Abstract
Background: Medical image fusion is widely used to capture complementary information from images of different modalities. The aim of fusion techniques is to combine the useful information present in the source images, so that the fused image exhibits more information than any single source.
Objective: In the current study, a BEMD-based multi-modal medical image fusion technique is presented, in which the Teager-Kaiser energy operator (TKEO) is applied to the lower BIMFs. The results are compared with six routine methods.
Material and Methods: In this experimental study, an image fusion technique is presented that uses bi-dimensional empirical mode decomposition (BEMD), the Teager-Kaiser energy operator (TKEO) for local feature selection, and the Hierarchical Model and X (HMAX) visual cortex model. BEMD-based fusion can preserve much of the functional information. In the fusion process, we adopt TKEO as the fusion rule for the lower bi-dimensional intrinsic mode functions (BIMFs) of the two images, and the HMAX model as the fusion rule for the higher BIMFs, which is verified to be more appropriate for the human visual system. Integrating BEMD with this efficient fusion scheme retains more of the spatial and functional features of the input images.
Results: We compared our method with the IHS, DWT, LWT, PCA, NSCT, and SIST methods. The simulation results show that the presented method is effective in terms of mutual information, fused-image quality (QAB/F), standard deviation, peak signal-to-noise ratio, and structural similarity, with considerably better results than the six typical fusion methods.
Conclusion: The statistical analyses revealed that our algorithm significantly improved spatial features and reduced color distortion compared with the other fusion techniques. The proposed approach can be used in routine practice. Fusion of functional and morphological medical images is possible before, during, and after the treatment of tumors in different organs, and the fused images can support interventional procedures and further assessment.
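The TKEO-based selection rule described above can be sketched on a single decomposition level (the full BEMD/HMAX pipeline is not reproduced here). The 2D Teager-Kaiser operator and the pixel-wise "keep the higher-energy source" rule are standard; the function names are illustrative.

```python
import numpy as np

def tkeo2d(img):
    """2D Teager-Kaiser energy operator:
    psi(i,j) = 2*x(i,j)^2 - x(i-1,j)*x(i+1,j) - x(i,j-1)*x(i,j+1),
    with the horizontal/vertical cross terms omitted at the borders."""
    x = img.astype(float)
    psi = 2.0 * x**2
    psi[1:-1, :] -= x[:-2, :] * x[2:, :]   # vertical cross term
    psi[:, 1:-1] -= x[:, :-2] * x[:, 2:]   # horizontal cross term
    return psi

def fuse_by_tkeo(a, b):
    """Pixel-wise fusion rule: keep the source with higher local
    Teager-Kaiser energy (ties go to the first input)."""
    return np.where(tkeo2d(a) >= tkeo2d(b), a, b)
```

In the paper this rule is applied to the lower BIMFs produced by BEMD; here it is shown directly on two images for brevity.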
Affiliation(s)
- Mozaffarilegha M: PhD, Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Yaghobi Joybari A: MD, Department of Radiation Oncology, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mostaar A: PhD, Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran
4. Dey N, Li S, Bermond K, Heintzmann R, Curcio CA, Ach T, Gerig G. Multi-modal Image Fusion for Multispectral Super-resolution in Microscopy. Proc SPIE Int Soc Opt Eng 2019; 10949. [PMID: 31777411; DOI: 10.1117/12.2512598]
Abstract
Spectral imaging is a ubiquitous tool in modern biochemistry. Despite acquiring dozens to thousands of spectral channels, existing technology cannot capture spectral images at the same spatial resolution as structural microscopy. Due to partial voluming and low light exposure, spectral images are often difficult to interpret and analyze. This highlights a need to upsample the low-resolution spectral image using spatial information contained in the high-resolution image, thereby creating a fused representation with high specificity both spatially and spectrally. In this paper, we propose a framework for the fusion of co-registered structural and spectral microscopy images to create super-resolved representations of spectral images. As a first application, we super-resolve spectral images of retinal tissue imaged with confocal laser scanning microscopy, using spatial information from structured illumination microscopy. Second, we super-resolve mass spectrometry images of mouse brain tissue, using spatial information from high-resolution histology images. We present a systematic validation of model assumptions crucial to maintaining the original nature of the spectra and the applicability of super-resolution. Goodness-of-fit of the spectral predictions is evaluated through functional R² values, and the spatial quality of the super-resolved images is evaluated using normalized mutual information.
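The fusion idea above, predicting each spectral channel at high resolution from a co-registered structural image, can be sketched with a per-channel linear fit. This is a deliberate simplification of the paper's model: the function name, the block-averaging downsampling, and the intensity-plus-bias regressors are all assumptions for illustration.

```python
import numpy as np

def superresolve_spectrum(structural_hr, spectral_lr, factor):
    """Predict each spectral channel at high resolution from a structural
    image via a per-channel linear fit (a toy stand-in for the paper's model).

    structural_hr: (H, W) high-resolution structural image
    spectral_lr:   (h, w, C) low-resolution spectral image, H = h*factor
    """
    h, w, C = spectral_lr.shape
    # Downsample the structural image by block averaging onto the spectral grid.
    s_lr = structural_hr.reshape(h, factor, w, factor).mean(axis=(1, 3))
    # Fit intensity + bias -> C channels by least squares on the coarse grid.
    X = np.stack([s_lr.ravel(), np.ones(h * w)], axis=1)
    coeff, *_ = np.linalg.lstsq(X, spectral_lr.reshape(-1, C), rcond=None)
    # Apply the fitted model at full structural resolution.
    X_hr = np.stack([structural_hr.ravel(), np.ones(structural_hr.size)], axis=1)
    return (X_hr @ coeff).reshape(*structural_hr.shape, C)
```

Goodness-of-fit of such a model can then be checked per channel with R² on the coarse grid, as in the paper's validation.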
Affiliation(s)
- Neel Dey: Department of Computer Science and Engineering, New York University Tandon School of Engineering, NY, USA
- Shijie Li: Department of Computer Science and Engineering, New York University Tandon School of Engineering, NY, USA
- Katharina Bermond: Department of Ophthalmology, University Hospital Würzburg, Würzburg, Germany
- Rainer Heintzmann: Institute of Physical Chemistry, Friedrich-Schiller-University Jena, Germany; Leibniz Institute of Photonic Technology (IPHT), Jena, Germany
- Christine A Curcio: Department of Ophthalmology and Visual Sciences, University of Alabama at Birmingham, AL, USA
- Thomas Ach: Department of Ophthalmology, University Hospital Würzburg, Würzburg, Germany
- Guido Gerig: Department of Computer Science and Engineering, New York University Tandon School of Engineering, NY, USA
5. Midulla M, Pescatori L, Chevallier O, Nakai M, Ikoma A, Gehin S, Berthod PE, Ne R, Loffroy R, Dake M. Future of IR: Emerging Techniques, Looking to the Future…and Learning from the Past. J Belg Soc Radiol 2019; 103:12. [PMID: 30828696; DOI: 10.5334/jbsr.1727]
Abstract
Innovation has been the cornerstone of interventional radiology since the early years of its founders, with a multitude of new therapeutic approaches developed over the last 50 years. What does the future hold? This article presents an overview of upcoming developments that are gaining traction at this moment, focusing on three areas: new applications of existing techniques, particularly embolotherapy and interventional oncology; cutting-edge devices; and the imaging technologies at the forefront of image guidance. Beyond these, clinical vision and the patient relationship remain crucial for the future of the discipline.
6. Qiu X, Liu W, Zhang M, Lin H, Zhou S, Lei Y, Xia J. Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation. Pain Med 2018; 18:2181-2186. [PMID: 28340174; DOI: 10.1093/pm/pnx017]
Abstract
Objective: Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads.
Design: Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation.
Results: Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%.
Conclusions: Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application.
Affiliation(s)
| | | | | | - Hengzhou Lin
- Neurosurgery, the Second People's Hospital of Shenzhen, Shenzhen University 1st Affiliated Hospital, Shenzhen, Guangdong, China
| | - Shoujun Zhou
- Shenzhen institutes of advanced technology, Chinese academy of sciences
| | | | | |
Collapse
7. Chen S, Su H, Zhang R, Tian J, Yang L. The Tradeoff Analysis for Remote Sensing Image Fusion Using Expanded Spectral Angle Mapper. Sensors (Basel) 2008; 8:520-8. [PMID: 27879720; DOI: 10.3390/s8010520]
Abstract
Image fusion is a useful tool for integrating a high-resolution panchromatic image (HRPI) with a low-resolution multispectral image (LRMI) to produce a high-resolution multispectral image (HRMI). To date, many image fusion techniques have been developed that attempt to improve the spatial resolution of the LRMI to that of the HRPI while reliably preserving its spectral properties. However, many studies have indicated that there is a tradeoff between spatial resolution improvement and spectral property preservation of the LRMI, and it is difficult for existing methods to excel at both. Based on a minimization problem, this paper mathematically analyzes this tradeoff in fusing remote sensing images. In the experiments, four fusion methods are evaluated using the expanded spectral angle mapper (ESAM). The results confirm that all of the tested methods exhibit this tradeoff.
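The classical spectral angle mapper underlying the expanded metric above compares, pixel by pixel, the angle between the spectral vector of the fused image and that of the reference; smaller angles mean better spectral preservation. The expanded (ESAM) variant from the paper is not reproduced here; this sketch shows only the base metric, with illustrative function names.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectral vectors:
    theta = arccos(<a, b> / (||a|| * ||b||))."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards float round-off

def mean_sam(reference, fused):
    """Mean spectral angle over all pixels of two (H, W, C) images;
    lower values indicate better spectral preservation."""
    ref = reference.reshape(-1, reference.shape[-1])
    fus = fused.reshape(-1, fused.shape[-1])
    return float(np.mean([spectral_angle(r, f) for r, f in zip(ref, fus)]))
```

Because the angle ignores vector magnitude, SAM isolates spectral-shape distortion from brightness changes, which is why it pairs naturally with spatial-quality metrics in tradeoff analyses like this one.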