1
Lin P, Li C, Flores-Valle A, Wang Z, Zhang M, Cheng R, Cheng JX. Tilt-angle stimulated Raman projection tomography. Opt Express 2022; 30:37112-37123. [PMID: 36258628] [PMCID: PMC9662602] [DOI: 10.1364/oe.470527] [Received: 07/20/2022] [Revised: 09/08/2022] [Accepted: 09/09/2022]
Abstract
Stimulated Raman projection tomography is a label-free volumetric chemical imaging technology that allows three-dimensional (3D) reconstruction of the chemical distribution in a biological sample from angle-dependent stimulated Raman scattering projection images. However, the projection image acquisition process requires rotating the sample, contained in a glass capillary held by a complicated sample rotation stage, which limits the volumetric imaging speed and inhibits the study of living samples. Here, we report a tilt-angle stimulated Raman projection tomography (TSPRT) system which acquires angle-dependent projection images by utilizing tilt-angle beams to image the sample from different azimuth angles sequentially. The TSPRT system, which is free of sample rotation, enables rapid scanning of different views through a tailor-designed four-galvo-mirror scanning system. We present the design of the optical system, the theory, and the calibration procedure for chemical tomographic reconstruction. 3D vibrational images of polystyrene beads and C. elegans are demonstrated in the C-H vibrational region.
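To make the tomographic reconstruction step concrete, the minimal sketch below shows conventional filtered back-projection of angle-dependent projections (a sinogram) in Python with scikit-image. The phantom image, angle count, and use of the default ramp filter are illustrative assumptions; this is not the TSPRT calibration or reconstruction pipeline described in the paper.

```python
# Minimal sketch: reconstruct a 2D slice from angle-dependent projections
# by filtered back-projection. Assumes scikit-image is installed.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Synthetic test slice standing in for one z-plane of a chemical map.
slice_2d = shepp_logan_phantom()                      # 400 x 400 test image
angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # azimuth angles (degrees)

# Forward model: each sinogram column is a line-integral projection at one
# azimuth angle (in TSPRT these views come from tilt-angle beams rather than
# physical sample rotation).
sinogram = radon(slice_2d, theta=angles)

# Inverse: filtered back-projection (default ramp filter) recovers the slice.
reconstruction = iradon(sinogram, theta=angles)

print("reconstruction shape:", reconstruction.shape)
print("RMSE vs. phantom:", np.sqrt(np.mean((reconstruction - slice_2d) ** 2)))
```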
Affiliation(s)
- Peng Lin
- Department of Electrical and Computer Engineering, Boston University, 8 St. Mary’s St., Boston, MA 02215, USA
- Chuan Li
- Department of Electrical and Computer Engineering, Boston University, 8 St. Mary’s St., Boston, MA 02215, USA
- Andres Flores-Valle
- Max Planck Institute for Neurobiology of Behavior–caesar (MPINB), Bonn 53175, Germany
- Zian Wang
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA 02215, USA
- Meng Zhang
- Department of Electrical and Computer Engineering, Boston University, 8 St. Mary’s St., Boston, MA 02215, USA
- Ran Cheng
- Department of Chemistry, Boston University, 590 Commonwealth Ave, Boston, MA 02215, USA
- Ji-Xin Cheng
- Department of Electrical and Computer Engineering, Boston University, 8 St. Mary’s St., Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, 44 Cummington Mall, Boston, MA 02215, USA
- Department of Chemistry, Boston University, 590 Commonwealth Ave, Boston, MA 02215, USA
- Photonics Center, Boston University, 8 St. Mary’s St., Boston, MA 02215, USA
2
Wang H, Wang N, Xie H, Wang L, Zhou W, Yang D, Cao X, Zhu S, Liang J, Chen X. Two-stage deep learning network-based few-view image reconstruction for parallel-beam projection tomography. Quant Imaging Med Surg 2022; 12:2535-2551. [PMID: 35371942] [PMCID: PMC8923870] [DOI: 10.21037/qims-21-778] [Received: 08/04/2021] [Accepted: 12/20/2021]
Abstract
BACKGROUND: Projection tomography (PT) is an important and valuable method for fast volumetric imaging with isotropic spatial resolution. Sparse-view or limited-angle reconstruction-based PT can greatly reduce data acquisition time, lower radiation doses, and simplify sample fixation modes. However, few techniques can currently achieve image reconstruction from few-view projection data, which is especially important for in vivo PT in living organisms.

METHODS: A two-stage deep learning network (TSDLN)-based framework was proposed for parallel-beam PT reconstruction using few-view projections. The framework is composed of a reconstruction network (R-net) and a correction network (C-net). The R-net is a generative adversarial network (GAN) that completes the image information in a direct back-projection (BP) of the sparse signal, bringing the reconstructed image close to the reconstruction obtained from fully sampled projection data. The C-net is a U-net array that denoises the compensated result to obtain a high-quality reconstructed image.

RESULTS: The accuracy and feasibility of the proposed TSDLN-based framework for few-view-projection reconstruction were first evaluated with simulations using images from the DeepLesion public dataset. The framework exhibited better reconstruction performance than traditional analytic and iterative algorithms, especially for sparse-view projection images. For example, with as few as two projections, the TSDLN-based framework reconstructed high-quality images very close to the original image, with structural similarities greater than 0.8. By applying previously acquired optical PT (OPT) data to the TSDLN-based framework trained on computed tomography (CT) data, we further demonstrated the transfer capability of the framework. The results showed that when the number of projections was reduced to 5, the contours and distribution information of the samples could still be seen in the reconstructed images.

CONCLUSIONS: The simulations and experimental results showed that the TSDLN-based framework has strong reconstruction abilities using few-view projection images and has great potential for application to in vivo PT.
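As a rough illustration of the two-stage idea only, here is a minimal PyTorch sketch in which a stand-in R-net refines a direct back-projection image and a stand-in C-net then denoises the result. The layer counts, channel widths, and class names are assumptions for illustration and do not reproduce the published GAN / U-net-array architecture or its training procedure.

```python
# Two-stage sketch: R-net (completion) -> C-net (correction/denoising).
# Assumes PyTorch is installed; architectures are illustrative stand-ins.
import torch
import torch.nn as nn

class RNet(nn.Module):
    """Stand-in generator: completes image content in a coarse BP image."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, bp_image):
        # Residual connection: predict a correction to the back-projection.
        return bp_image + self.body(bp_image)

class CNet(nn.Module):
    """Stand-in correction stage: a tiny encoder-decoder denoiser."""
    def __init__(self, ch=32):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, coarse_recon):
        return coarse_recon + self.up(self.down(coarse_recon))

# Usage: few-view back-projection image -> R-net -> C-net -> final image.
bp = torch.randn(1, 1, 128, 128)   # stand-in for a direct BP of 2-5 projections
recon = CNet()(RNet()(bp))
print(recon.shape)                 # torch.Size([1, 1, 128, 128])
```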
Affiliation(s)
- Huiyuan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Hui Xie
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Lin Wang
- School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, China
- Wangting Zhou
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Defu Yang
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Xu Cao
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Shouping Zhu
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Jimin Liang
- School of Electronic Engineering, Xidian University, Xi’an, China
- Xueli Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China