1.
Piol A, Sanderson D, del Cerro CF, Lorente-Mur A, Desco M, Abella M. Hybrid Reconstruction Approach for Polychromatic Computed Tomography in Highly Limited-Data Scenarios. Sensors (Basel, Switzerland) 2024; 24:6782. PMID: 39517679; PMCID: PMC11548251; DOI: 10.3390/s24216782. Received: 07/30/2024; Revised: 10/10/2024; Accepted: 10/15/2024; Indexed: 11/16/2024.
Abstract
Conventional strategies for mitigating beam-hardening artifacts in computed tomography (CT) fall into two main categories: (1) postprocessing following conventional reconstruction and (2) iterative reconstruction incorporating a beam-hardening model. The former fails in low-dose and/or limited-data cases, while the latter substantially increases computational cost. Although deep learning-based methods have been proposed for several limited-data CT scenarios, few works in the literature have dealt with beam-hardening artifacts, and none have addressed the problems caused by randomly selected projections and a highly limited angular span. We propose the deep learning-based prior image constrained (PICDL) framework, a hybrid method for obtaining CT images free from beam-hardening artifacts in different limited-data scenarios. It combines a modified version of the Prior Image Constrained Compressed Sensing (PICCS) algorithm that incorporates the L2 norm (L2-PICCS) with a prior image generated by applying a deep learning (DL) model to a preliminary FDK reconstruction. The model is based on a modified U-Net architecture in which ResNet-34 replaces the original encoder. Evaluation with rodent head studies in a small-animal CT scanner showed that the proposed method corrects beam-hardening artifacts, recovers patient contours, and compensates for streak and deformation artifacts in scenarios with a limited span and a limited number of randomly selected projections. Hallucinations introduced into the prior image by the deep learning model were eliminated, while the target information was effectively recovered by the L2-PICCS algorithm.
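The core idea of constraining a reconstruction toward a (possibly imperfect) prior image with an L2 penalty can be sketched on a toy linear problem. This is only an illustrative gradient-descent sketch of an objective of the form ||Ax − b||² + λ||x − x_prior||²; the actual L2-PICCS formulation, solver, and CT system model in the paper differ, and all names here are hypothetical.

```python
import numpy as np

def l2_prior_reconstruct(A, b, x_prior, lam=0.5, step=0.01, n_iter=2000):
    """Toy gradient descent on ||A x - b||^2 + lam * ||x - x_prior||^2.

    Illustrative only: a stand-in for an L2 prior-constrained objective,
    not the paper's L2-PICCS algorithm.
    """
    x = x_prior.copy()
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - b) + 2 * lam * (x - x_prior)
        x -= step * grad
    return x

# Tiny underdetermined system: 2 measurements, 3 unknowns
# (mimicking limited-data CT, where A alone cannot determine x).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x_prior = np.array([1.1, 1.9, 2.8])   # imperfect "DL prior" image
x_hat = l2_prior_reconstruct(A, b, x_prior)
```

The prior resolves the null space of the measurement operator, while the data term pulls the solution back toward consistency with the measurements, so the result improves on the prior alone.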
Affiliation(s)
- Alessandro Piol
- Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Department of Information Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy
- Daniel Sanderson
- Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Carlos F. del Cerro
- Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Antonio Lorente-Mur
- Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Manuel Desco
- Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), 28029 Madrid, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), 28029 Madrid, Spain
- Mónica Abella
- Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), 28029 Madrid, Spain
2.
Zhang Z, Li C, Wang W, Dong Z, Liu G, Dong Y, Zhang Y. Towards full-stack deep learning-empowered data processing pipeline for synchrotron tomography experiments. Innovation (N Y) 2024; 5:100539. PMID: 38089566; PMCID: PMC10711238; DOI: 10.1016/j.xinn.2023.100539. Received: 06/16/2023; Accepted: 11/13/2023; Indexed: 10/16/2024. Open access.
Abstract
Synchrotron tomography experiments are transitioning into multifunctional, cross-scale, and dynamic characterizations, enabled by new-generation synchrotron light sources and rapid developments in beamline instrumentation. However, as spatial and temporal resolving power enters a new era, this transition generates vast amounts of data, imposing a significant burden on the data processing end. Deep learning, a highly accurate and efficient data processing method, shows great potential to address the big-data challenge at future synchrotron beamlines. In this review, we discuss recent advances employing deep learning at different stages of the synchrotron tomography data processing pipeline. We also highlight how applications in other data-intensive fields, such as medical imaging and electron tomography, can be migrated to synchrotron tomography. Finally, we provide our thoughts on possible challenges and opportunities as well as the outlook, envisioning selected deep learning methods, curated big models, and customized learning strategies, all through an intelligent scheduling solution.
Affiliation(s)
- Zhen Zhang
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China
- Chun Li
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Wenhui Wang
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Zheng Dong
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Gongfa Liu
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China
- Yuhui Dong
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
- Yi Zhang
- Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
3.
Guo Z, Liu Z, Barbastathis G, Zhang Q, Glinsky ME, Alpert BK, Levine ZH. Noise-resilient deep learning for integrated circuit tomography. Optics Express 2023; 31:15355-15371. PMID: 37157639; DOI: 10.1364/oe.486213. Indexed: 05/10/2023.
Abstract
X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume the noise statistics of test data are acquired a priori from training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm and apply it to integrated circuit tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in test data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
4.
Chen C, Xing Y, Gao H, Zhang L, Chen Z. Sam's Net: A Self-Augmented Multistage Deep-Learning Network for End-to-End Reconstruction of Limited Angle CT. IEEE Transactions on Medical Imaging 2022; 41:2912-2924. PMID: 35576423; DOI: 10.1109/tmi.2022.3175529. Indexed: 06/15/2023.
Abstract
Limited-angle reconstruction is a typical ill-posed problem in computed tomography (CT). Given incomplete projection data, images reconstructed by conventional analytical algorithms and iterative methods suffer from severe structural distortions and artifacts. In this paper, we propose a self-augmented multi-stage deep-learning network (Sam's Net) for end-to-end reconstruction of limited-angle CT. Building on the alternating minimization technique, Sam's Net integrates multi-stage self-constraints into cross-domain optimization to provide additional constraints on the manifold of neural networks. In practice, a sinogram completion network (SCNet) and an artifact suppression network (ASNet), together with domain transformation layers, constitute the backbone for cross-domain optimization. An online self-augmentation module is designed following the alternating minimization scheme, enabling a self-augmented learning procedure and multi-stage inference. In addition, a substitution operation is applied as a hard constraint on the solution space based on data fidelity, and a learnable weighting layer is constructed for data consistency refinement. Sam's Net forms a new framework for ill-posed reconstruction problems. In the training phase, the self-augmented procedure guides the optimization into a tightened solution space with an enriched, diverse data distribution and enhanced data consistency. In the inference phase, multi-stage prediction improves performance progressively. Extensive experiments with both simulated and practical projections under 90-degree and 120-degree fan-beam configurations validate that Sam's Net significantly improves reconstruction quality with high stability and robustness.
5.
Guo Z, Song JK, Barbastathis G, Glinsky ME, Vaughan CT, Larson KW, Alpert BK, Levine ZH. Physics-assisted generative adversarial network for X-ray tomography. Optics Express 2022; 30:23238-23259. PMID: 36225009; DOI: 10.1364/oe.460208. Received: 04/04/2022; Accepted: 05/31/2022; Indexed: 06/16/2023.
Abstract
X-ray tomography non-invasively images the interior of objects in three dimensions, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms, which require a prior distribution known in advance, deep reconstruction networks can learn a prior distribution by sampling the training distribution. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both known physics and the learned prior. Compared with methods that incorporate less physics into training, PGAN reduces the photon requirement needed to achieve a given error rate with limited projection angles. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
6.
Thies M, Wagner F, Huang Y, Gu M, Kling L, Pechmann S, Aust O, Grüneboom A, Schett G, Christiansen S, Maier A. Calibration by differentiation - Self-supervised calibration for X-ray microscopy using a differentiable cone-beam reconstruction operator. J Microsc 2022; 287:81-92. PMID: 35638174; DOI: 10.1111/jmi.13125. Received: 09/14/2021; Revised: 04/20/2022; Accepted: 05/22/2022; Indexed: 11/28/2022.
Abstract
High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, holes in the bone a few micrometers in size. Imaging living animals at that resolution, however, is extremely challenging and requires very sophisticated data processing to convert the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and, additionally, makes the entire pipeline deep learning compatible by ensuring differentiability. This allows fitting trainable modules both before and after the actual reconstruction step in a purely data-driven way using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping correction module operating on the raw projection images, using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality, as it avoids cupping artifacts and decreases the difference in gray values between outer and inner bone by 68% to 94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel CT systems. This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and is an important step toward reducing the resolution limit of in-vivo bone imaging to the single-micrometer domain.
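The calibration idea can be illustrated with a deliberately simplified toy: an unknown cupping strength is fitted by minimizing a self-supervisory flatness metric on the corrected profile, with finite-difference gradients standing in for the paper's autodiff pipeline. The quadratic cupping model, the variance metric, and all names here are assumptions for illustration, not the paper's actual processing chain.

```python
import numpy as np

def variance_metric(profile):
    # Self-supervisory quality metric: a cupping-free flat phantom
    # should have (near-)constant gray values, i.e. low variance.
    return np.var(profile)

def calibrate_cupping(measured, t, lr=0.5, n_iter=200, eps=1e-4):
    """Fit the strength c of a quadratic cupping model m(t) = x(t) + c*t^2
    by finite-difference gradient descent on the variance metric."""
    c = 0.0
    for _ in range(n_iter):
        f = lambda cc: variance_metric(measured - cc * t**2)
        grad = (f(c + eps) - f(c - eps)) / (2 * eps)
        c -= lr * grad
    return c

t = np.linspace(-1.0, 1.0, 101)
flat_phantom = np.ones_like(t)
measured = flat_phantom + 0.3 * t**2      # simulated cupping artifact
c_hat = calibrate_cupping(measured, t)
corrected = measured - c_hat * t**2
```

No ground-truth image is needed: the metric alone identifies the correction parameter, which is the essence of the self-supervised calibration described above.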
Affiliation(s)
- Mareike Thies
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Fabian Wagner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Yixing Huang
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Mingxuan Gu
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Lasse Kling
- Institute for Nanotechnology and Correlative Microscopy e.V. INAM, Forchheim, Germany
- Sabrina Pechmann
- Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, Germany
- Oliver Aust
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander-Universität Erlangen-Nürnberg and Universitätsklinikum Erlangen, Erlangen, Germany
- Anika Grüneboom
- Leibniz Institute for Analytical Sciences ISAS, Dortmund, Germany
- Georg Schett
- Department of Internal Medicine 3 - Rheumatology and Immunology, Friedrich-Alexander-Universität Erlangen-Nürnberg and Universitätsklinikum Erlangen, Erlangen, Germany
- Deutsches Zentrum für Immuntherapie, Friedrich-Alexander-Universität Erlangen-Nürnberg and Universitätsklinikum Erlangen, Erlangen, Germany
- Silke Christiansen
- Institute for Nanotechnology and Correlative Microscopy e.V. INAM, Forchheim, Germany
- Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Forchheim, Germany
- Physics Department, Freie Universität Berlin, Berlin, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
7.
Fan F, Kreher B, Keil H, Maier A, Huang Y. Fiducial marker recovery and detection from severely truncated data in navigation assisted spine surgery. Med Phys 2022; 49:2914-2930. PMID: 35305271; DOI: 10.1002/mp.15617. Received: 09/07/2021; Revised: 02/16/2022; Accepted: 03/06/2022; Indexed: 11/11/2022. Open access.
Abstract
PURPOSE: Fiducial markers are commonly used in navigation-assisted minimally invasive spine surgery to transfer image coordinates into real-world coordinates. In practice, these markers might be located outside the field-of-view (FOV) of the C-arm cone-beam computed tomography (CBCT) systems used intraoperatively, due to limited detector sizes. As a consequence, reconstructed markers in CBCT volumes suffer from artifacts and distorted shapes, which hinders navigation.
METHODS: In this work, we propose two fiducial marker detection methods: direct detection from distorted markers (direct method) and detection after marker recovery (recovery method). For direct detection from distorted markers in reconstructed volumes, an efficient automatic marker detection method using two neural networks and a conventional circle detection algorithm is proposed. For marker recovery, a task-specific data preparation strategy is proposed to recover markers from severely truncated data, after which a conventional marker detection algorithm is applied for position detection. The networks in both methods are trained on simulated data: for the direct method, 6800 images for the U-Net and 10000 images for ResNet50; for the recovery method, 1360 images for FBPConvNet and Pix2pixGAN. A simulated data set with 166 markers and 4 cadaver cases with real fiducials are used for evaluation.
RESULTS: The direct method achieves 100% detection rates within 1 mm detection error on simulated data with normal truncation and on simulated data with heavier noise, but detects only 94.6% of markers in the extremely severe truncation case. The recovery method detects all markers successfully in the three test data sets, with around 95% of markers detected within 0.5 mm error. For real cadaver data, both methods achieve 100% marker detection rates with mean registration error below 0.2 mm.
CONCLUSIONS: Our experiments demonstrate that the direct method detects distorted markers accurately and that the recovery method, with the task-specific data preparation strategy, is highly robust and generalizable across data sets. The task-specific data preparation reconstructs structures of interest outside the FOV from severely truncated data better than conventional data preparation.
Affiliation(s)
- Fuxin Fan
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Holger Keil
- Department of Trauma and Orthopedic Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91054, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91054, Germany
8.
Flenner S, Bruns S, Longo E, Parnell AJ, Stockhausen KE, Müller M, Greving I. Machine learning denoising of high-resolution X-ray nanotomography data. Journal of Synchrotron Radiation 2022; 29:230-238. PMID: 34985440; PMCID: PMC8733986; DOI: 10.1107/s1600577521011139. Received: 04/01/2021; Accepted: 10/23/2021; Indexed: 05/13/2023.
Abstract
High-resolution X-ray nanotomography is a quantitative tool for investigating specimens from a wide range of research areas. However, the reconstructed tomograms are often degraded by noise and are therefore not suitable for automatic segmentation, so filtering is usually required before detailed quantitative analysis; most filters, however, blur the reconstructed tomograms. Machine learning (ML) techniques offer a powerful alternative to conventional filtering methods. In this article, we verify that a self-supervised denoising ML technique can eliminate noise from nanotomography data very efficiently. The technique is applied to high-resolution nanotomography data and compared to conventional filters, such as a median filter and a nonlocal means filter, optimized for tomographic data sets. The ML approach proves to be a very powerful tool that outperforms conventional filters by eliminating noise without blurring relevant structural features, thus enabling efficient quantitative analysis in different scientific fields.
Affiliation(s)
- Silja Flenner
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Stefan Bruns
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Elena Longo
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Andrew J. Parnell
- Department of Physics and Astronomy, University of Sheffield, Western Bank, Sheffield S3 7RH, United Kingdom
- Kilian E. Stockhausen
- Department of Osteology and Biomechanics, University Medical Center, Lottestrasse 55a, 22529 Hamburg, Germany
- Martin Müller
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Imke Greving
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
9.
Huang Y, Preuhs A, Manhart M, Lauritsch G, Maier A. Data Extrapolation From Learned Prior Images for Truncation Correction in Computed Tomography. IEEE Transactions on Medical Imaging 2021; 40:3042-3053. PMID: 33844627; DOI: 10.1109/tmi.2021.3072568. Indexed: 06/12/2023.
Abstract
Data truncation is a common problem in computed tomography (CT). Truncation causes cupping artifacts inside the field-of-view (FOV) and missing anatomical structures outside the FOV. Deep learning has achieved impressive results in CT reconstruction from limited data, but its robustness is still a concern for clinical applications. Although the image quality of learning-based compensation schemes may be inadequate for clinical diagnosis, they can provide prior information for more accurate extrapolation than conventional heuristic extrapolation methods. With the extrapolated projections, a conventional image reconstruction algorithm can then be applied to obtain the final reconstruction. In this work, a general plug-and-play (PnP) method for truncation correction is proposed based on this idea, into which various deep learning methods and conventional reconstruction algorithms can be plugged. The PnP method integrates data consistency for measured data with learned prior image information for truncated data, which is shown to offer better robustness and interpretability than deep learning alone. To demonstrate its efficacy, two state-of-the-art deep learning methods, FBPConvNet and Pix2pixGAN, are investigated for truncation correction in cone-beam CT in noise-free and noisy cases, and their robustness is evaluated on false negative and false positive lesion cases. With the proposed PnP method, false lesion structures are corrected for both deep learning methods. For FBPConvNet, the root-mean-square error (RMSE) inside the FOV improves from 92 HU to around 30 HU with PnP in the noisy case. On its own, Pix2pixGAN generally achieves better image quality than FBPConvNet for truncation correction; PnP further improves its RMSE inside the FOV from 42 HU to around 27 HU. The efficacy of PnP is also demonstrated on real clinical head data.
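The data-consistency step at the heart of such a plug-and-play scheme reduces to keeping the measured detector samples and using the network output only where data are truncated. The following is a minimal sketch under the assumption that the measured and learned sinograms live on the same (angles × detector) grid; array shapes and names are hypothetical simplifications of the paper's pipeline.

```python
import numpy as np

def pnp_extrapolate(measured, learned, fov_mask):
    """Data-consistency step of a plug-and-play truncation correction:
    keep measured samples inside the FOV, use the learned (network)
    estimate only outside, where the acquisition is truncated."""
    return np.where(fov_mask, measured, learned)

n_angles, n_det = 4, 8
rng = np.random.default_rng(0)
full = rng.random((n_angles, n_det))                  # ground-truth sinogram
fov_mask = np.zeros((n_angles, n_det), dtype=bool)
fov_mask[:, 2:6] = True                               # central detector region
measured = np.where(fov_mask, full, 0.0)              # truncated acquisition
learned = full + 0.05 * rng.standard_normal(full.shape)  # imperfect DL estimate
combined = pnp_extrapolate(measured, learned, fov_mask)
```

A conventional algorithm would then reconstruct from `combined`, so any network error is confined to the extrapolated region while the measured data stay untouched.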
10.
A Survey of Soft Computing Approaches in Biomedical Imaging. Journal of Healthcare Engineering 2021; 2021:1563844. PMID: 34394885; PMCID: PMC8356006; DOI: 10.1155/2021/1563844. Received: 06/04/2021; Revised: 07/11/2021; Accepted: 07/21/2021; Indexed: 12/11/2022.
Abstract
Medical imaging is an essential technique for the diagnosis and treatment of diseases in modern clinics, and soft computing plays a major role in its recent advances, handling uncertainty and improving image quality. Various soft computing approaches have been proposed for medical applications. This paper discusses the main medical imaging modalities and presents a short review of soft computing approaches such as fuzzy logic, artificial neural networks, genetic algorithms, machine learning, and deep learning. We also compare the approaches across imaging modalities based on the parameters used for system evaluation. Finally, based on this comparative analysis, possible research strategies for further development are proposed. To the best of our knowledge, no previous work has examined this issue.
11.
Wang J, Li M, Cheng J, Guo Z, Li D, Wu S. Exact reconstruction condition for angle-limited computed tomography of chemiluminescence. Applied Optics 2021; 60:4273-4281. PMID: 34143113; DOI: 10.1364/ao.420223. Received: 01/22/2021; Accepted: 04/21/2021; Indexed: 06/12/2023.
Abstract
Computed tomography of chemiluminescence (CTC) is an effective technique for three-dimensional (3D) combustion diagnostics. It reconstructs the 3D concentrations of intermediate species, or 3D images of flame topology, from multiple chemiluminescence projections captured from different perspectives. Previous studies of CTC systems assumed that projections from arbitrary perspectives are available. In many practical applications, however, limited optical access restricts the range of view angles and the number of projections, greatly affecting reconstruction quality. In this paper, the exact reconstruction condition for angle-limited computed tomography of chemiluminescence is derived from Mojette transform theory and demonstrated by numerical simulations and experiments. The studies indicate that an object probed within a limited angular range can be reconstructed exactly when the number of grids, the number of projections, and the sampling rate of the projections satisfy the exact reconstruction condition. By increasing the sampling rate of the projections, high-quality tomographic reconstruction can be achieved with a few projections in a small angular range. Although this technique is discussed in the context of combustion diagnostics, it can also be adapted to other tomography applications.
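Exact-reconstruction conditions for Mojette-type discrete tomography are classically phrased via a Katz-style criterion on the projection directions. The sketch below implements that classical sufficiency check only; the condition derived in the paper is more elaborate (it additionally involves the projection sampling rate), so treat this as background, not the paper's criterion.

```python
def katz_sufficient(directions, n_cols, n_rows):
    """Katz-type sufficiency check for exact discrete reconstruction
    from Mojette projections along rational directions (p, q).

    An n_cols x n_rows pixel grid is uniquely determined when
    sum(|p|) >= n_cols or sum(|q|) >= n_rows.
    """
    sum_p = sum(abs(p) for p, q in directions)
    sum_q = sum(abs(q) for p, q in directions)
    return sum_p >= n_cols or sum_q >= n_rows

# Four low-angle directions suffice for an 8x8 grid (sum|p| = 8) ...
print(katz_sufficient([(1, 1), (2, 1), (3, 1), (2, 3)], 8, 8))  # True
# ... but three of them do not (sum|p| = 6, sum|q| = 3).
print(katz_sufficient([(1, 1), (2, 1), (3, 1)], 8, 8))          # False
```

This illustrates the qualitative message of the abstract: with enough (or finely enough sampled) projections, exact reconstruction is possible even within a small angular range.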
12.
Kalinin SV, Zhang S, Valleti M, Pyles H, Baker D, De Yoreo JJ, Ziatdinov M. Disentangling Rotational Dynamics and Ordering Transitions in a System of Self-Organizing Protein Nanorods via Rotationally Invariant Latent Representations. ACS Nano 2021; 15:6471-6480. PMID: 33861068; DOI: 10.1021/acsnano.0c08914. Indexed: 06/12/2023.
Abstract
The dynamics of complex ordering systems with active rotational degrees of freedom exemplified by protein self-assembly is explored using a machine learning workflow that combines deep learning-based semantic segmentation and rotationally invariant variational autoencoder-based analysis of orientation and shape evolution. The latter allows for disentanglement of the particle orientation from other degrees of freedom and compensates for lateral shifts. The disentangled representations in the latent space encode the rich spectrum of local transitions that can now be visualized and explored via continuous variables. The time dependence of ensemble averages allows insight into the time dynamics of the system and, in particular, illustrates the presence of the potential ordering transition. Finally, analysis of the latent variables along the single-particle trajectory allows tracing these parameters on a single-particle level. The proposed approach is expected to be universally applicable for the description of the imaging data in optical, scanning probe, and electron microscopy seeking to understand the dynamics of complex systems where rotations are a significant part of the process.
Affiliation(s)
- Sergei V Kalinin
- Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, United States
- Shuai Zhang
- Materials Science and Engineering, University of Washington, Seattle, Washington 98195, United States
- Physical Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99354, United States
- Mani Valleti
- Bredesen Center for Interdisciplinary Research, University of Tennessee, Knoxville, Tennessee 37996, United States
- Harley Pyles
- Department of Biochemistry, University of Washington, Seattle, Washington 98195, United States
- Institute for Protein Design, University of Washington, Seattle, Washington 98195, United States
- David Baker
- Department of Biochemistry, University of Washington, Seattle, Washington 98195, United States
- Institute for Protein Design, University of Washington, Seattle, Washington 98195, United States
- Howard Hughes Medical Institute, University of Washington, Seattle, Washington 98195, United States
- James J De Yoreo
- Materials Science and Engineering, University of Washington, Seattle, Washington 98195, United States
- Physical Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99354, United States
- Maxim Ziatdinov
- Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, United States
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, United States
13.
Abstract
The potential for convolutional neural networks to provide real-time imaging capabilities for coherent diffraction imaging experiments at XFELs is discussed.
Affiliation(s)
- Ross Harder
- Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439, USA
14.
Zhang T, Zhang L, Chen Z, Xing Y, Gao H. Fourier Properties of Symmetric-Geometry Computed Tomography and Its Linogram Reconstruction With Neural Network. IEEE Transactions on Medical Imaging 2020; 39:4445-4457. PMID: 32866095; DOI: 10.1109/tmi.2020.3020720. Indexed: 06/11/2023.
Abstract
In this work, we investigate the Fourier properties of symmetric-geometry computed tomography (SGCT), which has linearly distributed sources and detectors in a stationary configuration. A linkage between the 1D Fourier transform of a weighted projection from SGCT and the 2D Fourier transform of a deformed object is established in a simple mathematical form, i.e., a Fourier slice theorem for SGCT. Based on this theorem and the unique data sampling of SGCT in Fourier space, a Linogram-based Fourier reconstruction method is derived. We demonstrate that the entire Linogram reconstruction process can be embedded as known operators into an end-to-end neural network. As a learning-based approach, the proposed Linogram-Net can improve CT image quality in non-ideal imaging scenarios, for instance limited-angle SGCT, by combining weight learning in the projection domain with loss minimization in the image domain. Numerical simulations and physical experiments on an SGCT prototype platform showed that the proposed Linogram-based method achieves accurate reconstruction from a dual-SGCT scan and greatly reduces computational complexity compared with filtered backprojection-type reconstruction. Linogram-Net achieved accurate reconstruction with complete projection data and significantly suppressed image artifacts from a limited-angle SGCT scan mimicked using a clinical CT dataset, reducing the average CT number error in the selected regions of interest from 67.7 Hounsfield units (HU) to 28.7 HU and the average normalized mean square error of the overall images from 4.21e-3 to 2.65e-3.
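The classical (parallel-beam) Fourier slice theorem that the SGCT result generalizes can be verified numerically in a few lines: the 1D FFT of a projection equals the corresponding central line of the object's 2D FFT. Note this sketch covers only the standard special case; the SGCT version in the paper relates a weighted projection to a deformed object.

```python
import numpy as np

# Numerical check of the parallel-beam Fourier slice theorem:
# projecting along one axis, then taking the 1D FFT, equals the
# zero-frequency line (along that axis) of the 2D FFT.
rng = np.random.default_rng(42)
img = rng.random((64, 64))

projection = img.sum(axis=0)              # discrete line integrals along columns
slice_1d = np.fft.fft(projection)         # 1D FT of the projection
central_line = np.fft.fft2(img)[0, :]     # k_y = 0 line of the 2D FT

assert np.allclose(slice_1d, central_line)
```

Sampling the 2D Fourier space line by line in this way is exactly what makes direct Fourier (Linogram-style) reconstruction possible and cheap compared with filtered backprojection.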