1. Paganin DM, Morgan KS. X-ray Fokker-Planck equation for paraxial imaging. Sci Rep 2019;9:17537. PMID: 31772186; PMCID: PMC6879762; DOI: 10.1038/s41598-019-52284-5.
Abstract
The Fokker-Planck equation can be used in a partially-coherent imaging context to model the evolution of the intensity of a paraxial x-ray wave field with propagation. This forms a natural generalisation of the transport-of-intensity equation. The x-ray Fokker-Planck equation can simultaneously account for both propagation-based phase contrast and the diffusive effects of sample-induced small-angle x-ray scattering, when forming an x-ray image of a thin sample. Two derivations are given for the Fokker-Planck equation associated with x-ray imaging, together with a Kramers-Moyal generalisation thereof. Both equations are underpinned by the concept of unresolved speckle due to unresolved sample microstructure. These equations may be applied to the forward problem of modelling image formation in the presence of both coherent and diffusive energy transport. They may also be used to formulate associated inverse problems of retrieving the phase shifts due to a sample placed in an x-ray beam, together with the diffusive properties of the sample. The domain of applicability for the Fokker-Planck and Kramers-Moyal equations for paraxial imaging is at least as broad as that of the transport-of-intensity equation which they generalise; hence the technique is also expected to be useful for paraxial imaging using visible light, electrons and neutrons.
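The two equations the abstract contrasts can be sketched as follows (notation is ours, not quoted from the paper): the transport-of-intensity equation (TIE) describes coherent energy transport driven by the phase gradient, and the Fokker-Planck generalisation adds a diffusion term whose position-dependent coefficient D models unresolved small-angle scattering:

```latex
% Transport-of-intensity equation (coherent drift only):
\frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
  = -\frac{1}{k}\,\nabla_\perp \cdot
    \left[ I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp, z) \right]

% Fokker-Planck generalisation: the same drift term plus a diffusion
% term with coefficient D(\mathbf{r}_\perp) for sample-induced SAXS:
\frac{\partial I}{\partial z}
  = -\frac{1}{k}\,\nabla_\perp \cdot \left( I\, \nabla_\perp \phi \right)
  + \nabla_\perp^{2} \left[ D(\mathbf{r}_\perp)\, I \right]
```

Setting D = 0 recovers the TIE, which is the sense in which the Fokker-Planck equation generalises it.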
Affiliation(s)
- David M Paganin
- School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800, Australia.
- Kaye S Morgan
- School of Physics and Astronomy, Monash University, Clayton, Victoria, 3800, Australia
- Chair of Biomedical Physics, Department of Physics, Munich School of Bioengineering, and Institute of Advanced Study, Technische Universität München, 85748, Garching, Germany
2. Jia MJ, Bruza P, Jarvis LA, Gladstone DJ, Pogue BW. Multi-beam scan analysis with a clinical LINAC for high resolution Cherenkov-excited molecular luminescence imaging in tissue. Biomed Opt Express 2018;9:4217-4234. PMID: 30615721; PMCID: PMC6157777; DOI: 10.1364/boe.9.004217.
Abstract
Cherenkov-excited luminescence scanned imaging (CELSI) is achieved with external beam radiotherapy to map out molecular luminescence intensity or lifetime in tissue. Just as in fluorescence microscopy, the choice of excitation geometry affects the imaging time, spatial resolution and contrast recovered. In this study, the use of spatially patterned illumination was systematically studied by comparing scan shapes, starting with line-scan and block patterns and progressing from single beams to multiple parallel beams and then to clinically used radiation-therapy treatment plans. Image recovery was improved by a spatial-temporal modulation-demodulation method, which exploited simultaneously captured images of the excitation Cherenkov beam shape to deconvolve the CELSI images. Experimental studies used the multi-leaf collimator on a clinical linear accelerator (LINAC) to create the scanning patterns, and image resolution and contrast recovery were tested at different depths in tissue phantom material. As hypothesized, the smallest illumination squares achieved optimal resolution, but at the cost of lower signal and slower imaging time. Larger excitation blocks provided superior signal, but at the cost of increased radiation dose and lower resolution. Increasing the scan beams to multiple block patterns improved performance in terms of image fidelity, radiation dose and acquisition speed. The spatial resolution was mostly dependent upon pixel area, with an optimized side length near 38 mm and a beam scan pitch of P = 0.33, and the achievable imaging depth was increased from 14 mm to 18 mm with sufficient resolving power for 1 mm test objects. As a proof of concept, in vivo mouse tumor imaging was performed to show 3D rendering and quantification of tissue pO2, with values of 5.6 mmHg in tumor and 77 mmHg in normal tissue.
Affiliation(s)
- Mengyu Jeremy Jia
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Petr Bruza
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Lesley A. Jarvis
- Department of Medicine, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA
- David J. Gladstone
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Norris Cotton Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Department of Medicine, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA
- Brian W. Pogue
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Norris Cotton Cancer Center, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
3. Liu S, Zhou F, Liao Q. Defocus Map Estimation From a Single Image Based on Two-Parameter Defocus Model. IEEE Trans Image Process 2016;25:5943-5956. PMID: 28113397; DOI: 10.1109/tip.2016.2617460.
Abstract
Defocus map estimation (DME) is highly important in many computer vision applications. Nearly all existing approaches for DME from a single image are based on a one-parameter defocus model, which does not allow for variation of depth across edges. In this paper, a novel two-parameter model of defocused edges is proposed for DME from a single image. Through this model we can estimate the defocus amount on each side of an edge, and generate a confidence that the edge is a pattern edge, i.e., one across which the depth remains the same. Then, we modify the TV-L1 algorithm for structure-texture decomposition by taking advantage of this confidence to eliminate pattern edges while preserving structural ones. Finally, the defocus amounts estimated at edge positions are used as initial values, and the structure component is employed as guidance in the subsequent Laplacian matting procedure to avoid the influence of pattern edges on the final defocus map. Experimental results show that the proposed method effectively eliminates the influence of pattern edges compared with the state-of-the-art method. Furthermore, the estimated defocus map is applicable to depth estimation and foreground/background segmentation.
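To make the two-parameter idea concrete, here is a minimal moment-based sketch (not the authors' estimator; the Gaussian edge model and all names are our assumptions): a step edge blurred by a different Gaussian width on each side, with each side's defocus amount recovered from the second moment of the gradient profile on that side.

```python
import numpy as np
from math import erf

def blurred_step(x, sigma):
    # Unit step at x = 0 convolved with a Gaussian of standard deviation sigma.
    return np.array([0.5 * (1.0 + erf(v / (sigma * np.sqrt(2.0)))) for v in x])

def estimate_side_blurs(x, profile):
    """Estimate a separate blur width on each side of an edge at x = 0.

    The gradient of a Gaussian-blurred step is a Gaussian, so the
    gradient-weighted second moment of x on each half recovers sigma.
    A one-parameter model would force both sides to share one value.
    """
    g = np.abs(np.gradient(profile, x))
    left, right = x < 0, x > 0
    s_left = np.sqrt(np.sum(g[left] * x[left] ** 2) / np.sum(g[left]))
    s_right = np.sqrt(np.sum(g[right] * x[right] ** 2) / np.sum(g[right]))
    return s_left, s_right

# Synthetic depth edge: blur differs on the two sides (sigma 1 vs sigma 3).
x = np.linspace(-10.0, 10.0, 401)
profile = np.where(x < 0, blurred_step(x, 1.0), blurred_step(x, 3.0))
s1, s2 = estimate_side_blurs(x, profile)
```

For a pattern edge, in this toy model, the two estimates would agree, which is the cue the paper turns into a confidence value.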
4. Kodama K, Kubota A. Efficient reconstruction of all-in-focus images through shifted pinholes from multi-focus images for dense light field synthesis and rendering. IEEE Trans Image Process 2013;22:4407-4421. PMID: 24048015; DOI: 10.1109/tip.2013.2273668.
Abstract
Scene refocusing beyond extended depth of field, which allows users to observe objects effectively, is a goal of researchers in computational photography, microscopic imaging and related fields. Ordinary all-in-focus image reconstruction from a sequence of multi-focus images achieves extended depth of field, as if the reconstructed image were captured through a pinhole at the center of the lens. In this paper, we propose a novel method for reconstructing all-in-focus images through shifted pinholes on the lens, based on 3D frequency analysis of multi-focus images. Such shifted pinhole images are obtained by a linear combination of multi-focus images with scene-independent 2D filters in the frequency domain. The proposed method enables us to efficiently synthesize a dense 4D light field on the lens plane for image-based rendering, in particular robust scene refocusing with arbitrary bokeh. Using simple linear filters, our method not only reconstructs all-in-focus images, even for shifted pinholes, more robustly than conventional methods that depend on scene/focus estimation, but also achieves scene refocusing without the resolution limits of recent approaches that use special devices such as lens arrays.
5. Aydin T, Akgul YS. An occlusion insensitive adaptive focus measurement method. Opt Express 2010;18:14212-14224. PMID: 20588555; DOI: 10.1364/oe.18.014212.
Abstract
This paper proposes a new focus measurement method for Depth From Focus to recover the depth of a scene. The method employs an all-focused image of the scene to address the focus measure ambiguity of existing focus measures in the presence of occlusions. Depth discontinuities are handled effectively by using adaptively shaped and weighted support windows. The size of the support window can be increased conveniently for more robust depth estimation without introducing the window-size-related problems of Depth From Focus. Experiments on real and synthetically refocused images show that the introduced focus measurement method works effectively and efficiently in real-world applications.
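For reference, here is what a conventional focus measure computes: a minimal fixed-window sketch of the common sum-modified-Laplacian measure (the paper's contribution, adaptively shaped and weighted windows, is deliberately not reproduced here; this is the plain baseline it improves on).

```python
import numpy as np

def modified_laplacian(img):
    # Per-pixel modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|.
    p = np.pad(img, 1, mode="edge")
    c = p[1:-1, 1:-1]
    horiz = np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
    vert = np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
    return horiz + vert

def focus_measure(img, half_window=4):
    """Sum the modified Laplacian over a square support window per pixel.

    Depth From Focus picks, per pixel, the frame in the focal stack that
    maximises this measure. Larger windows are more robust to noise but
    smear depth discontinuities, which is what motivates adaptive windows.
    """
    ml = modified_laplacian(img)
    k = 2 * half_window + 1
    p = np.pad(ml, half_window, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(k):          # box sum via shifted accumulation
        for dx in range(k):
            out += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

# A blurred copy of a texture should score lower than the sharp original.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0
```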
Affiliation(s)
- Tarkan Aydin
- GIT Vision Lab, Department Of Computer Engineering, Gebze Institute Of Technology, Gebze, Kocaeli, Turkey.
6. Yang J, Schonfeld D. Virtual focus and depth estimation from defocused video sequences. IEEE Trans Image Process 2010;19:668-679. PMID: 19933002; DOI: 10.1109/tip.2009.2036708.
Abstract
In this paper, we present a novel method for virtual focus and object depth estimation from defocused video captured by a moving camera. We use the term virtual focus to refer to a new approach for producing in-focus image sequences by processing blurred videos captured by out-of-focus cameras. Our method relies on the concept of Depth-from-Defocus (DFD) for virtual focus estimation. However, the proposed approach overcomes limitations of DFD by reformulating the problem in a moving-camera scenario. We introduce the interframe image motion model, from which the relationship between the camera motion and blur characteristics can be formed. This relationship subsequently leads to a new method for blur estimation. We finally rely on the blur estimation to develop the proposed technique for object depth estimation and focused video reconstruction. The proposed approach can be utilized to correct out-of-focus video sequences and can potentially replace the expensive apparatus required for auto-focus adjustments currently employed in many camera devices. The performance of the proposed algorithm is demonstrated through error analysis and computer simulated experiments.
Affiliation(s)
- Junlan Yang
- Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607-7053, USA.
7. Maik V, Cho D, Shin J, Har D, Paik J. Color Shift Model-Based Segmentation and Fusion for Digital Autofocusing. J Imaging Sci Technol 2007;51(4):368. DOI: 10.2352/j.imagingsci.technol.(2007)51:4(368).
8. Kubota A, Aizawa K, Chen T. Reconstructing dense light field from array of multifocus images for novel view synthesis. IEEE Trans Image Process 2007;16:269-279. PMID: 17283785; DOI: 10.1109/tip.2006.884938.
Abstract
This paper presents a novel method for synthesizing a novel view from two sets of differently focused images taken by an aperture camera array for a scene consisting of two approximately constant depths. The proposed method consists of two steps. The first step is a view interpolation to reconstruct an all-in-focus dense light field of the scene. The second step is to synthesize a novel view by a light-field rendering technique from the reconstructed dense light field. The view interpolation in the first step can be achieved simply by linear filters that are designed to shift different object regions separately, without region segmentation. The proposed method can effectively create a dense array of pin-hole cameras (i.e., all-in-focus images), so that the novel view can be synthesized with better quality.
Affiliation(s)
- Akira Kubota
- Department of Information Processing, Tokyo Institute of Technology, Yokohama 226-8502, Japan.
9. Gureyev TE, Nesterets YI, Paganin DM, Wilkins SW. Effects of incident illumination on in-line phase-contrast imaging. J Opt Soc Am A 2006;23:34-42. PMID: 16478058; DOI: 10.1364/josaa.23.000034.
Abstract
Effects of incident illumination on phase-contrast images obtained by means of free-space propagation are investigated under the "transport-of-intensity" approximation. Analytical expressions for image intensity distribution are derived in the cases of coherent quasi-plane and quasi-spherical incident waves, as well as for spatially incoherent and quasi-homogeneous sources and some other types of sources. Practical methods for measuring the relevant parameters of the incident radiation are discussed together with formulas allowing one to calculate the effect of these parameters on the image intensity distribution. The results are expected to be useful in quantitative in-line imaging, phase retrieval, and tomography with polychromatic and spatially partially coherent radiation. As an application we present a method for simultaneous "automatic" phase retrieval and spatial deconvolution in in-line imaging of homogeneous objects using extended polychromatic x-ray sources.
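The standard partially coherent result this line of work builds on can be sketched as follows (notation is ours): for an extended spatially incoherent source at distance R1 before the object and a detector at distance R2 beyond it, the recorded in-line image is, to a good approximation, the coherent (TIE-regime) image convolved with a rescaled, inverted copy of the source intensity distribution:

```latex
% Coherent in-line image blurred by the projected source distribution:
I_{\mathrm{det}}(\mathbf{r}) \;\approx\;
  \left( I_{\mathrm{coh}} \ast \tilde{S} \right)(\mathbf{r}),
\qquad
\tilde{S}(\mathbf{r}) \;\propto\; S\!\left( -\frac{R_1}{R_2}\,\mathbf{r} \right)
% A source of width s thus contributes penumbral blur of width
% roughly s R_2 / R_1 in the detector plane.
```

This is why measuring the source parameters, as the abstract discusses, lets one deconvolve the source contribution ("spatial deconvolution") jointly with phase retrieval.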
Affiliation(s)
- Timur E Gureyev
- CSIRO Manufacturing and Infrastructure Technology, PB33, VIC 3169, Australia.
10. Kubota A, Aizawa K. Reconstructing arbitrarily focused images from two differently focused images using linear filters. IEEE Trans Image Process 2005;14:1848-1859. PMID: 16279184; DOI: 10.1109/tip.2005.854468.
Abstract
We present a novel filtering method for reconstructing an all-in-focus image, or an arbitrarily focused image, from two differently focused images. The method can arbitrarily manipulate the degree of blur of the objects using linear filters, without segmentation. The filters are uniquely determined from a linear imaging model in the Fourier domain. An effective and accurate blur estimation method is developed. Simulation results show that the accuracy and computational time of the proposed method are improved compared with a previous iterative method, and that blur estimation errors have very little effect on the quality of the reconstructed image. The method also performs well on real images, without visible artifacts.
Affiliation(s)
- Akira Kubota
- Department of Information Processing, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan.
11. Iacoviello D, Lucchetti M. Parametric characterization of the form of the human pupil from blurred noisy images. Comput Methods Programs Biomed 2005;77:39-48. PMID: 15639708; DOI: 10.1016/j.cmpb.2004.09.001.
Abstract
Fluctuation of the human pupil is an important parameter for non-invasive diagnosis of many diseases and in several clinical applications. The measurement device, the pupillometer, consists of a CCD camera that images the pupil. We suppose that the measured image is blurred by a Gaussian kernel and corrupted by additive white noise; moreover, an elliptic shape is assumed for the pupil. We present an extension of a multiscale edge-detection approach to identify parameters of the pupil: the location of its centre, the lengths of the semi-axes and the orientation of the corresponding ellipse. The chosen method requires knowledge of the degradation parameters of the assumed model, so we first present a simple but efficient method to determine these quantities from the measured image. We then apply the edge-detection procedure to identify points close to the pupil edge, within a chosen probability. Finally, we find the optimal ellipse fitting a suitable subset of the detected edge points. Results are presented, with comparisons to other edge-finding approaches.
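The final step, fitting an ellipse to detected edge points, can be sketched with a plain algebraic least-squares conic fit (a simplification of the paper's optimal fit; variable names are ours, and the normalisation assumes the ellipse does not pass through the coordinate origin):

```python
import numpy as np

def fit_ellipse_centre(x, y):
    """Least-squares fit of the conic A x^2 + B xy + C y^2 + D x + E y = 1
    to edge points; the centre is where the conic's gradient vanishes:
        2A x + B y + D = 0,   B x + 2C y + E = 0.
    """
    M = np.column_stack([x ** 2, x * y, y ** 2, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    centre = np.linalg.solve(np.array([[2 * A, B], [B, 2 * C]]),
                             np.array([-D, -E]))
    return centre

# Synthetic "pupil edge": ellipse centred at (3, 2) with semi-axes 4 and 2.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
x = 3.0 + 4.0 * np.cos(t)
y = 2.0 + 2.0 * np.sin(t)
centre = fit_ellipse_centre(x, y)
```

In practice the paper weights and selects the edge points first; with noisy points the same least-squares machinery applies, only with residual error.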
Affiliation(s)
- Daniela Iacoviello
- Department of Computer and Systems Science Antonio Ruberti, University of Rome La Sapienza, Via Eudossiana 18, 00184 Rome, Italy.
12. Li S, Kwok JTY, Tsang IWH, Wang Y. Fusing Images With Different Focuses Using Support Vector Machines. IEEE Trans Neural Netw 2004;15:1555-1561. PMID: 15565781; DOI: 10.1109/tnn.2004.837780.
Abstract
Many vision-related processing tasks, such as edge detection, image segmentation and stereo matching, can be performed more easily when all objects in the scene are in good focus. However, in practice this may not always be feasible, as optical lenses, especially those with long focal lengths, have only a limited depth of field. One common approach to recovering an everywhere-in-focus image is wavelet-based image fusion. First, several source images of the same scene with different focuses are taken and processed with the discrete wavelet transform (DWT). Among these wavelet decompositions, the coefficient with the largest magnitude is selected at each pixel location. Finally, the fused image is recovered by performing the inverse DWT. In this paper, we improve this fusion procedure by applying the discrete wavelet frame transform (DWFT) and support vector machines (SVM). Unlike the DWT, the DWFT yields a translation-invariant signal representation. Using features extracted from the DWFT coefficients, an SVM is trained to select the source image that has the best focus at each pixel location, and the corresponding DWFT coefficients are then incorporated into the composite wavelet representation. Experimental results show that the proposed method outperforms the traditional approach both visually and quantitatively.
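The conventional max-magnitude fusion rule the abstract takes as its baseline can be sketched with a hand-rolled single-level 2D Haar transform (the paper's DWFT features and SVM selection are more elaborate; this shows only the traditional coefficient-selection step, on even-sized images):

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar analysis of an even-sized image into four subbands.
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d (perfect reconstruction).
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_max_magnitude(img1, img2):
    """Average the approximation bands; for each detail coefficient keep
    the one with the larger magnitude (the presumably better-focused source),
    then invert the transform to get the fused image."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

The DWT's lack of translation invariance (fusing a shifted pair gives a different result) is exactly what motivates replacing it with the undecimated DWFT in the paper.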
Affiliation(s)
- Shutao Li
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, P.R. China.
13. Goudail F, Ruch O, Réfrégier P. Deconvolution of several versions of a scene perturbed by different defocus blurs: influence of kernel diameters on restoration quality and on robustness to kernel estimation. Appl Opt 2000;39:6602-6612. PMID: 18354674; DOI: 10.1364/ao.39.006602.
Abstract
It has been shown many times that using several versions of a scene perturbed by different blurs improves the quality of the restored image compared with using a single blurred image. We focus on large defocus blurs, and first consider the case in which two different blurring kernels are used. We analyze with numerical simulations the influence of the relative diameters of the two kernels on restoration quality. We then quantitatively evaluate how the two-kernel approach improves the robustness of restoration to a mismatch between the kernels used in designing the algorithm and the actual kernels that perturbed the image. We finally show that using three different kernels may not improve restoration performance over the two-kernel approach, but still improves robustness to kernel estimation.
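The benefit of having several differently blurred versions has a simple linear-algebra core, sketched here as a regularised least-squares (Wiener-type) combination in the Fourier domain (an illustration of the principle under known kernels and circular convolution, not the authors' restoration algorithm):

```python
import numpy as np

def multi_kernel_deconvolve(blurred, kernels, eps=1e-9):
    """Least-squares estimate of a scene from several versions, each
    circularly blurred by a different known kernel:
        F_hat = sum_i conj(H_i) G_i / (sum_i |H_i|^2 + eps).
    Frequencies suppressed by one kernel can survive in another, which is
    why two differently defocused versions beat a single blurred image."""
    num, den = 0.0, eps
    for g, h in zip(blurred, kernels):
        H = np.fft.fft2(h)
        num = num + np.conj(H) * np.fft.fft2(g)
        den = den + np.abs(H) ** 2
    return np.real(np.fft.ifft2(num / den))

def gaussian_kernel(n, sigma):
    # Periodic Gaussian kernel centred at the origin, normalised to unit sum.
    r = np.minimum(np.arange(n), n - np.arange(n))
    g = np.exp(-0.5 * (r / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

n = 64
rng = np.random.default_rng(2)
scene = rng.random((n, n))
k1, k2 = gaussian_kernel(n, 1.0), gaussian_kernel(n, 2.0)
g1 = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(k1)))
g2 = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(k2)))
restored = multi_kernel_deconvolve([g1, g2], [k1, k2])
```

The paper's robustness question corresponds to passing slightly wrong kernels to the restoration while the data were blurred by the true ones.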
Affiliation(s)
- F Goudail
- Physics and Image Processing Group, Fresnel Institute, Ecole Nationale Supérieure de Physique de Marseille, Domaine Universitaire de Saint-Jérôme, 13397 Marseille Cedex 20, France.