1
Wali R, Xu H, Cheruiyot C, Saleem HN, Janshoff A, Habeck M, Ebert A. Integrated machine learning and multimodal data fusion for patho-phenotypic feature recognition in iPSC models of dilated cardiomyopathy. Biol Chem 2024; 405:427-439. PMID: 38651266. DOI: 10.1515/hsz-2024-0023.
Abstract
Integration of multiple data sources presents a challenge for accurate prediction of molecular patho-phenotypic features in automated analysis of data from human model systems. Here, we applied machine learning-based data integration to distinguish patho-phenotypic features at the subcellular level for dilated cardiomyopathy (DCM). We employed a human induced pluripotent stem cell-derived cardiomyocyte (iPSC-CM) model of a DCM mutation in the sarcomere protein troponin T (TnT), TnT-R141W, compared to isogenic healthy (WT) control iPSC-CMs. We established a multimodal data fusion (MDF)-based analysis to integrate source datasets for Ca2+ transients, force measurements, and contractility recordings. Data were acquired for three layer types: single cells, cell monolayers, and 3D spheroid iPSC-CM models. For data analysis, including numerical conversion and fusion of the Ca2+ transient, force, and contractility data, a non-negative blind deconvolution (NNBD)-based method was applied. Using an XGBoost algorithm, we found high prediction accuracy for fused single cell, monolayer, and 3D spheroid iPSC-CM models (≥92 ± 0.08 %), as well as for fused Ca2+ transient, beating force, and contractility models (>96 ± 0.04 %). Integrating MDF and XGBoost provides a highly effective analysis tool for prediction of patho-phenotypic features in complex human disease models such as DCM iPSC-CMs.
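As a minimal sketch of the fusion-then-classify idea described above (not the authors' pipeline): per-cell feature vectors from each modality are concatenated ("late fusion") and a gradient-boosted classifier separates WT from mutant. All data here are synthetic, the feature dimensions are invented, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def fuse_modalities(calcium, force, contractility):
    """Late fusion: concatenate per-cell feature vectors from each modality."""
    return np.concatenate([calcium, force, contractility], axis=1)

# Synthetic per-cell features for WT (label 0) and mutant (label 1) cells.
n = 200
wt  = fuse_modalities(rng.normal(0.0, 1, (n, 4)),
                      rng.normal(0.0, 1, (n, 3)),
                      rng.normal(0.0, 1, (n, 3)))
mut = fuse_modalities(rng.normal(1.5, 1, (n, 4)),
                      rng.normal(-1.5, 1, (n, 3)),
                      rng.normal(1.5, 1, (n, 3)))
X = np.vstack([wt, mut])
y = np.array([0] * n + [1] * n)

clf = GradientBoostingClassifier(random_state=0).fit(X[::2], y[::2])  # train on half
acc = clf.score(X[1::2], y[1::2])                                     # score on the rest
```

With this synthetic separation the held-out accuracy is high, mirroring the kind of modality-fused classification the abstract reports.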
Affiliation(s)
- Ruheen Wali
- Department of Cardiology and Pneumology, Heart Research Center, University Medical Center, Göttingen University, Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Partner Site Göttingen, DZHK (German Center for Cardiovascular Research), Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Hang Xu
- Department of Cardiology and Pneumology, Heart Research Center, University Medical Center, Göttingen University, Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Partner Site Göttingen, DZHK (German Center for Cardiovascular Research), Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Cleophas Cheruiyot
- Department of Cardiology and Pneumology, Heart Research Center, University Medical Center, Göttingen University, Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Partner Site Göttingen, DZHK (German Center for Cardiovascular Research), Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Hafiza Nosheen Saleem
- Department of Cardiology and Pneumology, Heart Research Center, University Medical Center, Göttingen University, Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Partner Site Göttingen, DZHK (German Center for Cardiovascular Research), Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Andreas Janshoff
- Institute for Physical Chemistry, Göttingen University, Tammannstraße 6, D-37077 Göttingen, Germany
- Michael Habeck
- Microscopic Image Analysis, Jena University Hospital, Kollegiengasse 10, D-07743 Jena, Germany
- Antje Ebert
- Department of Cardiology and Pneumology, Heart Research Center, University Medical Center, Göttingen University, Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
- Partner Site Göttingen, DZHK (German Center for Cardiovascular Research), Robert-Koch-Strasse 40, D-37075 Göttingen, Germany
2
Yu H, Li D, Chen Y. A state-of-the-art review of image motion deblurring techniques in precision agriculture. Heliyon 2023; 9:e17332. PMID: 37416671. PMCID: PMC10320030. DOI: 10.1016/j.heliyon.2023.e17332.
Abstract
Image motion deblurring is a key technology in computer vision that has attracted significant attention for its ability to support accurate acquisition of motion image information, processing, and intelligent decision making. Motion blur has recently been recognized as one of the major challenges for applications of computer vision in precision agriculture. Motion-blurred images seriously reduce the accuracy of information acquisition in precision agriculture scenes, such as detection, tracking, and behavior analysis of animals, recognition of plant phenotypes, and identification of critical characteristics of pests and diseases. At the same time, the fast motion and irregular deformation of farmed animals and plants, together with the motion of the image capture device itself, pose great challenges for image motion deblurring. Hence, the demand for more efficient image motion deblurring methods in dynamic-scene applications is growing rapidly. To date, studies have addressed several forms of this challenge, e.g., spatial motion blur, multi-scale blur, and other types of blur. This paper starts with a categorization of the causes of image blur in precision agriculture. It then gives a detailed introduction to general-purpose motion deblurring methods and their strengths and weaknesses. Furthermore, these methods are compared for specific applications in precision agriculture, e.g., detection and tracking of livestock, harvest sorting and grading, and plant disease detection and phenotype identification. Finally, future research directions are discussed to push forward research and application of image motion deblurring in precision agriculture.
Affiliation(s)
- Yu Huihui
- School of Information Science & Technology, Beijing Forestry University, Beijing, 100083, PR China
- National Innovation Center for Digital Fishery, Beijing, 100083, PR China
- Key Laboratory of Smart Farming Technologies for Aquatic Animal and Livestock, Ministry of Agriculture and Rural Affairs, Beijing, 100083, PR China
- Li Daoliang
- National Innovation Center for Digital Fishery, Beijing, 100083, PR China
- Key Laboratory of Smart Farming Technologies for Aquatic Animal and Livestock, Ministry of Agriculture and Rural Affairs, Beijing, 100083, PR China
- Beijing Engineering and Technology Research Center for Internet of Things in Agriculture, Beijing, 100083, PR China
- College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, PR China
- Chen Yingyi
- National Innovation Center for Digital Fishery, Beijing, 100083, PR China
- Key Laboratory of Smart Farming Technologies for Aquatic Animal and Livestock, Ministry of Agriculture and Rural Affairs, Beijing, 100083, PR China
- Beijing Engineering and Technology Research Center for Internet of Things in Agriculture, Beijing, 100083, PR China
- College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, PR China
3
Martínez-Ojeda RM, Mugnier LM, Artal P, Bueno JM. Blind deconvolution of second harmonic microscopy images of the living human eye. Biomed Opt Express 2023; 14:2117-2128. PMID: 37206134. PMCID: PMC10191662. DOI: 10.1364/boe.486989.
Abstract
Second harmonic generation (SHG) imaging microscopy of thick biological tissues is affected by the presence of aberrations and scattering within the sample. Moreover, additional problems, such as uncontrolled movements, appear when imaging in-vivo. Deconvolution methods can be used to overcome these limitations under some conditions. In particular, we present here a technique based on a marginal blind deconvolution approach for improving SHG images obtained in vivo in the human eye (cornea and sclera). Different image quality metrics are used to quantify the attained improvement. Collagen fibers in both cornea and sclera are better visualized and their spatial distributions accurately assessed. This might be a useful tool to better discriminate between healthy and pathological tissues, especially those where changes in collagen distribution occur.
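The abstract mentions quantifying deconvolution gains with image quality metrics but does not name them; one common sharpness proxy is the variance of the Laplacian, which rises as fine detail is restored. A minimal numpy sketch on synthetic data (the images and the box blur are invented for illustration):

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a discrete 5-point Laplacian response."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))            # stand-in for a well-restored image
# Crude 3x3 box blur via shifted copies: stand-in for an unrestored image.
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

Blurring suppresses high spatial frequencies, so the metric drops for the blurred image; a successful deconvolution moves the score back toward the sharp value.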
Affiliation(s)
- Rosa M. Martínez-Ojeda
- Laboratorio de Óptica, Instituto Universitario de Investigación en Óptica y Nanofísica, Universidad de Murcia, Campus de Espinardo (Ed. 34), 30100 Murcia, Spain
- Pablo Artal
- Laboratorio de Óptica, Instituto Universitario de Investigación en Óptica y Nanofísica, Universidad de Murcia, Campus de Espinardo (Ed. 34), 30100 Murcia, Spain
- Juan M. Bueno
- Laboratorio de Óptica, Instituto Universitario de Investigación en Óptica y Nanofísica, Universidad de Murcia, Campus de Espinardo (Ed. 34), 30100 Murcia, Spain
4
Zhu B, Lv Q, Yang Y, Sui X, Zhang Y, Tang Y, Tan Z. Blind Deblurring of Remote-Sensing Single Images Based on Feature Alignment. Sensors (Basel) 2022; 22:7894. PMID: 36298241. PMCID: PMC9611111. DOI: 10.3390/s22207894.
Abstract
Motion blur recovery is a common method in the field of remote sensing image processing that can effectively improve the accuracy of detection and recognition. Among the existing motion blur recovery methods, the algorithms based on deep learning do not rely on a priori knowledge and, thus, have better generalizability. However, the existing deep learning algorithms usually suffer from feature misalignment, resulting in a high probability of missing details or errors in the recovered images. This paper proposes an end-to-end generative adversarial network (SDD-GAN) for single-image motion deblurring to address this problem and to optimize the recovery of blurred remote sensing images. Firstly, this paper applies a feature alignment module (FAFM) in the generator to learn the offset between feature maps to adjust the position of each sample in the convolution kernel and to align the feature maps according to the context; secondly, a feature importance selection module is introduced in the generator to adaptively filter the feature maps in the spatial and channel domains, preserving reliable details in the feature maps and improving the performance of the algorithm. In addition, this paper constructs a self-constructed remote sensing dataset (RSDATA) based on the mechanism of image blurring caused by the high-speed orbital motion of satellites. Comparative experiments are conducted on self-built remote sensing datasets and public datasets as well as on real remote sensing blurred images taken by an in-orbit satellite (CX-6(02)). The results show that the algorithm in this paper outperforms the comparison algorithm in terms of both quantitative evaluation and visual effects.
Affiliation(s)
- Baoyu Zhu
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100049, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Qunbo Lv
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100049, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Yuanbo Yang
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100049, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Xuefu Sui
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100049, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Yu Zhang
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100049, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Yinhui Tang
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100049, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Zheng Tan
- Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
- Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
5
Yoon S, Yang H, Seong W. Ray-based blind deconvolution with maximum kurtosis phase correction. J Acoust Soc Am 2022; 151:4237. PMID: 35778206. DOI: 10.1121/10.0011804.
Abstract
Ray-based blind deconvolution (RBD) is a method that estimates the source waveform and channel impulse response (CIR) using ray arrivals in an underwater environment. The RBD estimates the phase of the source waveform by using beamforming. However, low sampling rates, array shape deformation, and other factors can cause phase errors in the beamforming results. In this paper, phase correction is applied to the beamforming-estimated source phase to improve RBD performance. The impulsiveness of the CIR was used as additional information to correct the initially estimated source phase. Kurtosis was used to measure impulsiveness, and the phase correction that maximized the kurtosis of the CIRs was calculated through optimization. The proposed approach is called ray-based blind deconvolution with maximum kurtosis phase correction (RBD-MKPC) and is based on a single-input multiple-output system. The RBD-MKPC was tested with several CIR and source waveform combinations in the Shallow-Water Acoustic Variability Experiment 2015, using broadband high-frequency pulses (11-31 kHz) as the source and a sparse vertical 16-element line array as receivers. The results indicate that the RBD-MKPC improves the estimation performance. In addition, from an optimization point of view and compared with other initialization methods, the proposed method showed superior convergence speed and estimation performance.
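The core idea above is that the correct phase makes the estimated CIR maximally impulsive, and kurtosis measures that impulsiveness. A heavily simplified 1-D sketch (not the paper's beamforming pipeline): an impulsive response is smeared by a constant phase error, modeled here as a fractional Hilbert transform, and a grid search recovers the correction that maximizes kurtosis. The signal, phase model, and grid are all illustrative assumptions.

```python
import numpy as np

def kurtosis(x):
    """Normalized fourth moment: large for impulsive (spiky) signals."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2

def phase_shift(x, phi):
    """Constant phase shift (fractional Hilbert transform) of a real signal."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(x.size)
    return np.real(np.fft.ifft(X * np.exp(1j * phi * np.sign(f))))

# Impulsive channel response, smeared by an unknown phase error of 0.7 rad.
h = np.zeros(256); h[40] = 1.0; h[90] = 0.5
smeared = phase_shift(h, 0.7)

# Grid search for the correction that maximizes kurtosis (impulsiveness).
phis = np.linspace(-1.5, 1.5, 301)
best = max(phis, key=lambda p: kurtosis(phase_shift(smeared, -p)))
```

The kurtosis-maximizing correction lands on the injected 0.7 rad error, which is the principle RBD-MKPC exploits with a proper optimizer instead of a grid.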
Affiliation(s)
- Seunghyun Yoon
- Department of Naval Architecture and Ocean Engineering, Seoul National University, Seoul 08826, Republic of Korea
- Haesang Yang
- Department of Naval Architecture and Ocean Engineering, Seoul National University, Seoul 08826, Republic of Korea
- Woojae Seong
- Department of Naval Architecture and Ocean Engineering, Seoul National University, Seoul 08826, Republic of Korea
6
Dong W, Du Y, Xu J, Dong F, Ren S. Spatially adaptive blind deconvolution methods for optical coherence tomography. Comput Biol Med 2022; 147:105650. PMID: 35653849. DOI: 10.1016/j.compbiomed.2022.105650.
Abstract
Optical coherence tomography (OCT) is a powerful noninvasive imaging technique for detecting microvascular abnormalities. Following optical imaging principles, an OCT image will be blurred in the out-of-focus domain. Digital deconvolution is a commonly used method for image deblurring. However, the accuracy of traditional digital deconvolution methods, e.g., the Richardson-Lucy method, depends on prior knowledge of the point spread function (PSF), which varies with the imaging depth and is difficult to determine. In this paper, a spatially adaptive blind deconvolution framework is proposed for recovering clear OCT images from blurred images without a known PSF. First, a depth-dependent PSF is derived from the Gaussian beam model. Second, the blind deconvolution problem is formalized as a regularized energy minimization problem using the least squares method. Third, the clear image and imaging depth are simultaneously recovered from blurry images using an alternating optimization method. To improve the computational efficiency of the proposed method, an accelerated alternating optimization method is proposed based on the convolution theorem and Fourier transform. The proposed method is numerically implemented with various regularization terms, including total variation, Tikhonov, and l1 norm terms. The proposed method is used to deblur synthetic and experimental OCT images. The influence of the regularization term on the deblurring performance is discussed. The results show that the proposed method can accurately deblur OCT images. The proposed acceleration method can significantly improve the computational efficiency of blind deconvolution methods.
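The abstract cites Richardson-Lucy as the traditional known-PSF baseline. As a minimal 1-D illustration of that baseline (not the paper's spatially adaptive blind method), the classic multiplicative RL update on a synthetic spike signal blurred by a Gaussian PSF:

```python
import numpy as np

def conv(x, k):
    return np.convolve(x, k, mode="same")

def richardson_lucy(y, k, n_iter=200):
    """Classic Richardson-Lucy deconvolution with a known PSF k (non-blind)."""
    x = np.full_like(y, y.mean())       # flat non-negative initial estimate
    k_flip = k[::-1]                    # mirrored PSF for the correlation step
    for _ in range(n_iter):
        ratio = y / np.maximum(conv(x, k), 1e-12)
        x = x * conv(ratio, k_flip)     # multiplicative update keeps x >= 0
    return x

# Simulate: sparse non-negative signal blurred by a Gaussian PSF.
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(128); truth[30] = 1.0; truth[80] = 0.6
blurred = conv(truth, psf)
restored = richardson_lucy(blurred, psf)
```

The restored signal stays non-negative and re-concentrates energy at the true spike locations; RL's dependence on knowing `psf` exactly is the limitation the paper's blind framework removes.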
Affiliation(s)
- Wenxue Dong
- Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yina Du
- Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Jingjiang Xu
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Feng Dong
- Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Shangjie Ren
- Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
7
Jiang J, Zhou X, Liu J, Pan L, Pan Z, Zou F, Li Z, Li F, Ma X, Geng C, Zuo J, Li X. Optical Fiber Bundle-Based High-Speed and Precise Micro-Scanning for Image High-Resolution Reconstruction. Sensors (Basel) 2021; 22:127. PMID: 35009670. PMCID: PMC8747347. DOI: 10.3390/s22010127.
Abstract
We propose an imaging method based on an optical fiber bundle combined with a micro-scanning technique that improves image quality without complex image reconstruction algorithms. In the proposed method, a piezoelectric ceramic chip is used as the micro-displacement driver of the optical fiber bundle, which has the advantages of small volume, fast response speed, and high precision. The corresponding displacement of the optical fiber bundle can be generated by precise voltage control. An optical fiber bundle with a 4/80 μm core/cladding diameter and hexagonal arrangement is used to scan a 1951 USAF target. The scanning step is 1 μm, which is equivalent to the diffraction-limited resolution of the optical system. The corresponding information is recorded at high speed by photo-detectors, and a high-resolution image is obtained by image stitching. The minimum distinguishable stripe width of the proposed technique with piezoelectric-chip-driven micro-scanning is approximately 2.1 μm, roughly a twofold improvement over direct imaging with a CCD camera whose pixel size is close to the fiber core size. The experimental results indicate that an optical fiber bundle combined with piezoelectric-chip-driven micro-scanning is a high-speed, high-precision technique for high-resolution imaging.
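The stitching step above can be sketched in an idealized, noise-free form: each micro-scan position captures the scene on a coarse grid at a distinct sub-pixel offset, and the captures are interleaved into one fine grid. The 2x2 offset pattern and the toy scene are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def interleave(samples, step):
    """Stitch `step x step` sub-pixel-shifted low-res captures into a high-res grid."""
    n = samples[(0, 0)].shape[0]
    hi = np.zeros((n * step, n * step))
    for (dy, dx), img in samples.items():
        hi[dy::step, dx::step] = img    # each capture fills its offset lattice
    return hi

# Ground-truth high-res scene, sampled on a coarse grid at 2x2 sub-pixel offsets.
step = 2
scene = np.add.outer(np.arange(8), np.arange(8)).astype(float)
captures = {(dy, dx): scene[dy::step, dx::step]
            for dy in range(step) for dx in range(step)}
stitched = interleave(captures, step)
```

In this noiseless model the stitched image reproduces the scene exactly; in practice optics and sensor noise limit the gain, hence the measured ~2x resolution improvement.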
Affiliation(s)
- Jiali Jiang
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Xin Zhou
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- College of Materials Science and Opto-Electronic Technology, Chinese Academy of Sciences, Beijing 100049, China
- Jiaying Liu
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- College of Materials Science and Opto-Electronic Technology, Chinese Academy of Sciences, Beijing 100049, China
- Likang Pan
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- College of Materials Science and Opto-Electronic Technology, Chinese Academy of Sciences, Beijing 100049, China
- Ziting Pan
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- College of Materials Science and Opto-Electronic Technology, Chinese Academy of Sciences, Beijing 100049, China
- Fan Zou
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- College of Materials Science and Opto-Electronic Technology, Chinese Academy of Sciences, Beijing 100049, China
- Ziqiang Li
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Feng Li
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Xiaoyu Ma
- Chengdu Institute, Sichuan University of Arts and Science, Dazhou 635000, China
- Chao Geng
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Jing Zuo
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- College of Materials Science and Opto-Electronic Technology, Chinese Academy of Sciences, Beijing 100049, China
- Xinyang Li
- Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
8
Askari Javaran T, Hassanpour H. Using a Blur Metric to Estimate Linear Motion Blur Parameters. Comput Math Methods Med 2021; 2021:6048137. PMID: 34745327. PMCID: PMC8568521. DOI: 10.1155/2021/6048137.
Abstract
Motion blur is a common artifact in image processing, specifically in e-health services, caused by the motion of a camera or scene. In linear motion cases, the blur kernel, i.e., the function that simulates the linear motion blur process, depends on the length and direction of the blur, called the linear motion blur parameters. Estimating these parameters is a vital and sensitive stage in reconstructing a sharp version of a motion-blurred image, i.e., image deblurring. Since medical images may be blurry, blur parameter estimation can also be applied in e-health services to enhance such images. In this paper, methods are proposed for estimating the linear motion blur parameters from features extracted from a given single blurred image. The motion blur direction is estimated using the Radon transform of the spectrum of the blurred image. To estimate the motion blur length, the relation between a blur metric, called NIDCT (Noise-Immune Discrete Cosine Transform-based), and the motion blur length is applied. Experiments performed in this study showed that the NIDCT blur metric and the blur length have a monotonic relation: an increase in blur length leads to an increase in the blurriness value estimated via the NIDCT metric. This relation is applied to estimate the motion blur length. The efficiency of the proposed method is demonstrated by quantitative and qualitative experiments.
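The length-estimation step relies on a blur metric that varies monotonically with blur length. A hedged numpy sketch of that relationship, using a simple stand-in metric (fraction of spectral energy in high frequencies, computed with an FFT rather than the paper's NIDCT) and a horizontal box kernel as the linear motion blur model:

```python
import numpy as np

def motion_blur(img, length):
    """Horizontal linear motion blur with a box kernel of the given length."""
    k = np.ones(length) / length
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def sharpness_metric(img):
    """Stand-in metric: fraction of spectral energy above mid-band frequencies."""
    spec = np.abs(np.fft.fft2(img)) ** 2
    f = np.fft.fftfreq(img.shape[0])
    high = np.abs(f) > 0.25
    return spec[np.ix_(high, high)].sum() / spec.sum()

rng = np.random.default_rng(2)
img = rng.random((128, 128))
metrics = [sharpness_metric(motion_blur(img, L)) for L in (1, 3, 5, 7, 9)]
```

The metric decreases monotonically as blur length grows, which is exactly the kind of metric-vs-length curve that can be inverted to estimate the unknown blur length.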
Affiliation(s)
- Taiebeh Askari Javaran
- Computer Science Department, Faculty of Mathematics and Computer, Higher Education Complex of Bam, Bam, Iran
- Hamid Hassanpour
- Image Processing and Data Mining (IPDM) Research Lab, Faculty of Computer Engineering and Information Technology, Shahrood University of Technology, Shahrood, Iran
9
Xu H, Wali R, Cheruiyot C, Bodenschatz J, Hasenfuss G, Janshoff A, Habeck M, Ebert A. Non-negative blind deconvolution for signal processing in a CRISPR-edited iPSC-cardiomyocyte model of dilated cardiomyopathy. FEBS Lett 2021; 595:2544-2557. PMID: 34482543. DOI: 10.1002/1873-3468.14189.
Abstract
We developed an integrated platform for analysis of parameterized data from human disease models. We report a non-negative blind deconvolution (NNBD) approach to quantify calcium (Ca2+) handling, beating force, and contractility in human induced pluripotent stem cell-derived cardiomyocytes (iPSC-CMs) at the single-cell level. We employed CRISPR/Cas gene editing to introduce a dilated cardiomyopathy (DCM)-causing mutation in troponin T (TnT), TnT-R141W, into wild-type control iPSCs, generating a mutant line (MUT). The NNBD-based method enabled data parametrization, fitting, and analysis in wild-type controls versus isogenic MUT iPSC-CMs. Of note, Cas9-edited TnT-R141W iPSC-CMs revealed significantly reduced beating force and prolonged contractile event duration. The NNBD-based platform provides an alternative framework for improved quantitation of molecular disease phenotypes and may contribute to the development of novel diagnostic tools.
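As a minimal sketch of the non-negative recovery half of such an approach (the blind variant also estimates the kernel; here the kernel is assumed known): a 1-D trace is modeled as a convolution of a transient-shaped kernel with a non-negative event train, recovered by non-negative least squares. The kernel shape and event train are invented illustrations, not the paper's parameterization.

```python
import numpy as np
from scipy.optimize import nnls

def conv_matrix(kernel, n):
    """Dense matrix A such that A @ x equals the full convolution of kernel with x."""
    m = n + len(kernel) - 1
    A = np.zeros((m, n))
    for j in range(n):
        A[j:j + len(kernel), j] = kernel
    return A

# Transient-like kernel (fast rise, slow decay) and a sparse event train.
t = np.arange(40)
kernel = (1 - np.exp(-t / 2.0)) * np.exp(-t / 10.0)
events = np.zeros(100); events[10] = 1.0; events[55] = 0.7
trace = np.convolve(kernel, events)

A = conv_matrix(kernel, events.size)
recovered, _ = nnls(A, trace)          # non-negativity enforced by NNLS
```

On this noiseless toy problem NNLS recovers the event train exactly; the non-negativity constraint is what keeps deconvolved amplitudes physically meaningful.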
Affiliation(s)
- Hang Xu
- Heart Research Center, Department of Cardiology and Pneumology, University Medical Center, Goettingen University, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Goettingen, Germany
- Ruheen Wali
- Heart Research Center, Department of Cardiology and Pneumology, University Medical Center, Goettingen University, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Goettingen, Germany
- Cleophas Cheruiyot
- Heart Research Center, Department of Cardiology and Pneumology, University Medical Center, Goettingen University, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Goettingen, Germany
- Gerd Hasenfuss
- Heart Research Center, Department of Cardiology and Pneumology, University Medical Center, Goettingen University, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Goettingen, Germany
- Andreas Janshoff
- Institute for Physical Chemistry, Goettingen University, Germany
- Antje Ebert
- Heart Research Center, Department of Cardiology and Pneumology, University Medical Center, Goettingen University, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Goettingen, Germany
10
Deblurring Turbulent Images via Maximizing L1 Regularization. Symmetry (Basel) 2021. DOI: 10.3390/sym13081414.
Abstract
Atmospheric turbulence significantly degrades image quality. A blind image deblurring algorithm is needed, and a favorable image prior is the key to solving this problem. However, general sparse priors support blurry images instead of sharp ones, so the details of the restored images are lost. Recently developed priors are non-convex, resulting in complex and heuristic optimization. To handle these problems, we first propose a convex image prior, namely maximizing L1 regularization (ML1). Benefiting from the symmetry between ML1 and L1 regularization, ML1 supports clear images and better preserves image edges. Then, a novel soft suppression strategy is designed for the deblurring algorithm to inhibit artifacts. A coarse-to-fine scheme and a non-blind algorithm are also constructed. For qualitative comparison, a turbulent blur dataset is built. Experiments on this dataset and on real images demonstrate that the proposed method is superior to other state-of-the-art methods in blindly recovering turbulent images.
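The motivation above rests on a well-known observation: blurring reduces the L1 norm of image gradients, so a prior that minimizes it favors blurry solutions, while maximizing it favors sharp ones. A toy numpy demonstration (a striped test image and box blur of my choosing, not the paper's ML1 regularizer):

```python
import numpy as np

def grad_l1(img):
    """L1 norm of horizontal plus vertical finite-difference gradients."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

# Sharp striped image vs. a horizontally box-blurred copy.
sharp = np.zeros((64, 64)); sharp[:, ::2] = 1.0
k = np.ones(5) / 5
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, sharp)
```

The blurred image has a much smaller gradient L1 norm, so a score that is maximized (rather than minimized) over candidate restorations discriminates in favor of the sharp image.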
11
Improvement and Assessment of a Blind Image Deblurring Algorithm Based on Independent Component Analysis. Computation 2021. DOI: 10.3390/computation9070076
Abstract
The aim of the present paper is to improve an existing blind image deblurring algorithm, based on an independent component learning paradigm, by means of manifold calculus. The original technique applies an independent component analysis algorithm to a set of pseudo-images obtained by Gabor-filtering a blurred image, following an adapt-and-project paradigm. A comparison between the original technique and the improved method shows that independent component learning on the unit hypersphere by a Riemannian-gradient algorithm outperforms the adapt-and-project strategy. A comprehensive set of numerical tests evidenced the strengths and weaknesses of the discussed deblurring technique.
12
Balancing Heterogeneous Image Quality for Improved Cross-Spectral Face Recognition. Sensors 2021; 21(7):2322. PMID: 33810407; PMCID: PMC8038120; DOI: 10.3390/s21072322
Abstract
Matching infrared (IR) facial probes against a gallery of visible-light faces remains a challenge, especially when combined with cross-distance acquisition, due to the deteriorated quality of the IR data. In this paper, we study the scenario where visible-light faces are acquired at a short standoff, while IR faces are long-range data. To address the quality imbalance between the heterogeneous imagery, we propose to compensate for it by upgrading the lower-quality IR faces. Specifically, this is realized through cascaded face enhancement that combines an existing denoising algorithm (BM3D) with a new deep-learning-based deblurring model we propose (named SVDFace). Different IR bands, short-wave infrared (SWIR) and near-infrared (NIR), as well as different standoffs, are involved in the experiments. Results show that, in all cases, our proposed approach for quality balancing yields improved recognition performance, and it is especially effective for SWIR images at a longer standoff. Our approach also outperforms the easier, straightforward alternative of downgrading the higher-quality imagery. The cascaded face enhancement structure is likewise shown to be beneficial and necessary. Finally, inspired by singular value decomposition (SVD) theory, the proposed SVDFace deblurring model is succinct, efficient, and interpretable in structure. It proves advantageous over traditional deblurring algorithms as well as state-of-the-art deep-learning-based deblurring algorithms.
13
|
Francavilla MA, Lefkimmiatis S, Villena JF, G Polimeridis A. Maxwell parallel imaging. Magn Reson Med 2021; 86:1573-1585. [PMID: 33733495 DOI: 10.1002/mrm.28718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 01/15/2021] [Accepted: 01/15/2021] [Indexed: 11/09/2022]
Abstract
PURPOSE To develop a general framework for parallel imaging (PI) with the use of Maxwell regularization for the estimation of the sensitivity maps (SMs) and constrained optimization for the parameter-free image reconstruction. THEORY AND METHODS Certain characteristics of both the SMs and the images are routinely used to regularize the otherwise ill-posed optimization-based joint reconstruction from highly accelerated PI data. In this paper, we rely on a fundamental property of SMs-they are solutions of Maxwell equations-we construct the subspace of all possible SM distributions supported in a given field-of-view, and we promote solutions of SMs that belong in this subspace. In addition, we propose a constrained optimization scheme for the image reconstruction, as a second step, once an accurate estimation of the SMs is available. The resulting method, dubbed Maxwell parallel imaging (MPI), works for both 2D and 3D, with Cartesian and radial trajectories, and minimal calibration signals. RESULTS The effectiveness of MPI is illustrated for various undersampling schemes, including radial, variable-density Poisson-disc, and Cartesian, and is compared against the state-of-the-art PI methods. Finally, we include some numerical experiments that demonstrate the memory footprint reduction of the constructed Maxwell basis with the help of tensor decomposition, thus allowing the use of MPI for full 3D image reconstructions. CONCLUSION The MPI framework provides a physics-inspired optimization method for the accurate and efficient image reconstruction from arbitrary accelerated scans.
14
Liu J, Yan M, Zeng T. Surface-Aware Blind Image Deblurring. IEEE Trans Pattern Anal Mach Intell 2021; 43:1041-1055. PMID: 31535982; DOI: 10.1109/tpami.2019.2941472
Abstract
Blind image deblurring is a conundrum because there are infinitely many pairs of latent image and blur kernel. To obtain a stable and reasonable deblurred image, proper prior knowledge of the latent image and the blur kernel is essential. Different from recent works based on statistical observations of the difference between the blurred image and the clean one, our method is built on a surface-aware strategy arising from intrinsic geometrical considerations. This approach facilitates blur kernel estimation thanks to the sharp edges preserved in the intermediate latent image. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on deblurring text and natural images. Moreover, our method achieves attractive results in some challenging cases, such as low-illumination images with large saturated regions and impulse noise. A direct extension of our method to the non-uniform deblurring problem also validates the effectiveness of the surface-aware prior.
15
Zhang Y, Lau Y, Kuo HW, Cheung S, Pasupathy A, Wright J. On the Global Geometry of Sphere-Constrained Sparse Blind Deconvolution. IEEE Trans Pattern Anal Mach Intell 2021; 43:999-1008. PMID: 31494544; DOI: 10.1109/tpami.2019.2939237
Abstract
Blind deconvolution is the problem of recovering a convolutional kernel a0 and an activation signal x0 from their convolution y = a0 ⊛ x0. This problem is ill-posed without further constraints or priors. This paper studies the situation where the nonzero entries of the activation signal are sparsely and randomly populated. We normalize the convolution kernel to have unit Frobenius norm and cast the sparse blind deconvolution problem as a nonconvex optimization problem over the sphere. With this spherical constraint, every spurious local minimum turns out to be close to some signed shift-truncation of the ground truth, under certain hypotheses. This benign property motivates an effective two-stage algorithm that recovers the ground truth from the partial information offered by a suboptimal local minimum. This geometry-inspired algorithm recovers the ground truth for certain microscopy problems and also exhibits promising performance on the more challenging image deblurring problem. Our insights into the global geometry and the two-stage algorithm extend to the convolutional dictionary learning problem, where a superposition of multiple convolutional signals is observed.
16
Shao WZ, Lin YZ, Liu YY, Wang LQ, Ge Q, Bao BK, Li HB. Gradient-based discriminative modeling for blind image deblurring. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.06.093
17
Hu X, Zhang S, Zhang Y, Liu Y, Wang G. Large depth-of-field three-dimensional shape measurement with the focal sweep technique. Opt Express 2020; 28:31197-31208. PMID: 33115098; DOI: 10.1364/oe.404260
Abstract
Three-dimensional (3D) shape measurement based on the fringe projection technique has been extensively used for scientific discoveries and industrial practices. Yet one of the most challenging issues is its limited depth of field (DOF). This paper presents a method to drastically increase the DOF of 3D shape measurement by employing the focal sweep technique. The proposed method uses an electrically tunable lens (ETL) to rapidly sweep the focal plane during image integration and a post-capture deconvolution algorithm to reconstruct focused images for 3D reconstruction. Experimental results demonstrate that the proposed method can achieve high-resolution and high-accuracy 3D shape measurement with greatly improved DOF in real time.
18
Kotera J, Matas J, Sroubek F. Restoration of fast moving objects. IEEE Trans Image Process 2020; 29:8577-8589. PMID: 32813657; DOI: 10.1109/tip.2020.3016490
Abstract
If an object is photographed while in motion in front of a static background, the object will be blurred while the background stays sharp and is partially occluded by the object. The goal is to recover the object's appearance from such a blurred image. We adopt the image formation model for fast moving objects and consider objects undergoing 2D translation and rotation. For this scenario we formulate the estimation of the object's shape, appearance, and motion from a single image and a known background as a constrained optimization problem with appropriate regularization terms. Both similarities and differences with blind deconvolution are discussed, the latter caused mainly by the coupling of the object's appearance and shape in the acquisition model. Necessary conditions for solution uniqueness are derived, and a numerical solution based on the alternating direction method of multipliers is presented. The proposed method is evaluated on a new dataset.
19
Chen X, Zhu Y, Liu W, Sun J, Zhang Y. Blur kernel estimation of noisy-blurred image via dynamic structure prior. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.03.067
20
21
Shajkofci A, Liebling M. Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy. IEEE Trans Image Process 2020; 29:5848-5861. PMID: 32305918; DOI: 10.1109/tip.2020.2986880
Abstract
Optical microscopy is an essential tool in biology and medicine. Imaging thin, yet non-flat objects in a single shot (without relying on more sophisticated sectioning setups) remains challenging, as the shallow depth of field of high-resolution microscopes leads to unsharp image regions and makes depth localization and quantitative image interpretation difficult. Here, we present a method that improves the resolution of light-microscopy images of such objects by locally estimating image distortion while jointly estimating object distance to the focal plane. Specifically, we estimate the parameters of a spatially-variant point spread function (PSF) model using a convolutional neural network (CNN), which does not require instrument- or object-specific calibration. Our method recovers PSF parameters from the image itself, with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions, while remaining robust to object rotation, illumination variations, and photon noise. When the recovered PSFs are used with a spatially-variant and regularized Richardson-Lucy (RL) deconvolution algorithm, we observed up to 2.1 dB better signal-to-noise ratio (SNR) compared to other blind deconvolution (BD) techniques. Following microscope-specific calibration, we further demonstrate that the recovered PSF model parameters permit estimating surface depth with a precision of 2 micrometers, and over an extended range when using engineered PSFs. Our method opens up multiple possibilities for enhancing images of non-flat objects with minimal need for a priori knowledge about the optical setup.
22
23
Cheung SC, Shin JY, Lau Y, Chen Z, Sun J, Zhang Y, Müller MA, Eremin IM, Wright JN, Pasupathy AN. Dictionary learning in Fourier-transform scanning tunneling spectroscopy. Nat Commun 2020; 11:1081. PMID: 32102995; PMCID: PMC7044214; DOI: 10.1038/s41467-020-14633-1
Abstract
Modern high-resolution microscopes are commonly used to study specimens that have dense and aperiodic spatial structure. Extracting meaningful information from images obtained with such microscopes remains a formidable challenge. Fourier analysis is commonly used to analyze the structure of such images. However, the Fourier transform fundamentally suffers from severe phase noise when applied to aperiodic images. Here, we report the development of an algorithm based on nonconvex optimization that directly uncovers the fundamental motifs present in a real-space image. Apart from being quantitatively superior to traditional Fourier analysis, we show that this algorithm also uncovers phase-sensitive information about the underlying motif structure. We demonstrate its usefulness by studying scanning tunneling microscopy images of a Co-doped iron arsenide superconductor and prove that applying the algorithm allows for the complete recovery of quasiparticle interference in this material. Aperiodic structure imaging suffers limitations when utilizing Fourier analysis. The authors report an algorithm based on nonconvex optimization that quantitatively overcomes these limitations, demonstrated by studying aperiodic structures via the phase-sensitive interference in STM images.
Affiliation(s)
- Sky C Cheung
- Department of Physics, Columbia University, New York, NY, 10027, USA
- John Y Shin
- Department of Physics, Columbia University, New York, NY, 10027, USA
- Yenson Lau
- Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA
- Zhengyu Chen
- Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA
- Ju Sun
- Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA
- Yuqian Zhang
- Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA
- Marvin A Müller
- Institut für Theoretische Physik III, Ruhr-Universität Bochum, 44801, Bochum, Germany
- Ilya M Eremin
- Institut für Theoretische Physik III, Ruhr-Universität Bochum, 44801, Bochum, Germany; National University of Science and Technology MISiS, 119049, Moscow, Russian Federation
- John N Wright
- Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA
- Abhay N Pasupathy
- Department of Physics, Columbia University, New York, NY, 10027, USA
24
Hehn L, Tilley S, Pfeiffer F, Stayman JW. Blind deconvolution in model-based iterative reconstruction for CT using a normalized sparsity measure. Phys Med Biol 2019; 64:215010. PMID: 31561247; DOI: 10.1088/1361-6560/ab489e
Abstract
Model-based iterative reconstruction techniques for CT that include a description of the noise statistics and a physical forward model of the image formation process have proven to increase image quality for many applications. Specifically, including models of the system blur in the physical forward model, and thus implicitly performing a deconvolution of the projections during tomographic reconstruction, has demonstrated distinct improvements, especially in terms of resolution. However, the results rely strongly on an exact characterization of all components contributing to the system blur. Such characterizations can be laborious, and even a slight mismatch can diminish image quality significantly. Therefore, we introduce a novel objective function which enables us to jointly estimate system blur parameters during tomographic reconstruction. Conventional objective functions are biased in terms of blur and can assign the lowest cost to blurred reconstructions with low noise levels. A key feature of our objective function is a new normalized sparsity measure for CT based on total-variation regularization, constructed to be less biased in terms of blur. We outline a solving strategy for jointly recovering low-dimensional blur parameters during tomographic reconstruction. We perform an extensive simulation study, evaluating the performance of the regularization and the dependency of the different parts of the objective function on the blur parameters. Scenarios with different regularization strengths and system blurs are investigated, demonstrating that we can recover the blur parameter used for the simulations. The proposed strategy is validated, and the dependency of the objective function on the number of iterations is analyzed. Finally, our approach is experimentally validated on test-bench data of a human wrist phantom, where the estimated blur parameter coincides well with visual inspection. Our findings are not restricted to attenuation-based CT and may facilitate the recovery of more complex imaging model parameters.
Affiliation(s)
- Lorenz Hehn (corresponding author)
- Chair of Biomedical Physics, Department of Physics and Munich School of BioEngineering, Technical University of Munich, 85748 Garching, Germany; Department of Diagnostic and Interventional Radiology, School of Medicine & Klinikum rechts der Isar, Technical University of Munich, 81675 München, Germany
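The exact CT objective is given in the paper; the underlying idea of a normalized sparsity measure (an L1/L2 ratio of image gradients, in the spirit of Krishnan et al.'s normalized sparsity, sketched here generically rather than as the authors' implementation) can be illustrated as:

```python
import numpy as np

def normalized_tv_sparsity(img, eps=1e-12):
    """L1/L2 ratio of finite-difference image gradients.

    Unlike plain total variation (the L1 term alone), the ratio is
    scale-invariant, and blurring, which spreads gradient energy over
    many small entries, increases it rather than decreasing it.
    """
    gx = np.diff(img, axis=1)  # horizontal gradients
    gy = np.diff(img, axis=0)  # vertical gradients
    g = np.concatenate([gx.ravel(), gy.ravel()])
    return np.sum(np.abs(g)) / (np.sqrt(np.sum(g ** 2)) + eps)
```

A sharp step edge concentrates its gradient in a few large entries (low ratio), while a blurred edge spreads it out (high ratio), which is the sense in which such a measure is "less biased in terms of blur" than L1 alone.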
25
Affiliation(s)
- Yicheng Kang
- Department of Mathematical Sciences, Bentley University, Waltham, MA
26
Anwar S, Huynh CP, Porikli F. Image Deblurring with a Class-Specific Prior. IEEE Trans Pattern Anal Mach Intell 2019; 41:2112-2130. PMID: 30004871; DOI: 10.1109/tpami.2018.2855177
Abstract
A fundamental problem in image deblurring is to reliably recover distinct spatial frequencies that have been suppressed by the blur kernel. To tackle this issue, existing image deblurring techniques often rely on generic image priors, such as the sparsity of salient features including image gradients and edges. However, these priors only help recover part of the frequency spectrum, such as frequencies near the high end. To this end, we pose the following specific questions: (i) Does image class information offer an advantage over existing generic priors for image quality restoration? (ii) If a class-specific prior exists, how should it be encoded into a deblurring framework to recover attenuated image frequencies? In this work, we devise a class-specific prior based on band-pass filter responses and incorporate it into a deblurring strategy. More specifically, we show that the subspace of band-pass-filtered images and their intensity distributions serve as useful priors for recovering image frequencies that are difficult to recover with generic image priors. We demonstrate that our image deblurring framework, when equipped with the above priors, significantly outperforms many state-of-the-art methods using generic image priors or class-specific exemplars.
27
Lee H, Jung C, Kim C. Blind Deblurring of Text Images Using a Text-Specific Hybrid Dictionary. IEEE Trans Image Process 2019; 29:710-723. PMID: 31425032; DOI: 10.1109/tip.2019.2933739
Abstract
In this paper, we propose a blind text-image deblurring algorithm using a text-specific hybrid dictionary. After careful analysis, we find that such a hybrid dictionary provides powerful contextual information for text image deblurring. Our method is inspired by the observation that an intermediate latent image contains not only sharp regions but also multiple types of small blurred regions. Based on this observation, we propose a sparse-representation prior for text images that models the relationship between an intermediate latent image and the desired sharp image. To this end, we collect three different types of image patch pairs, 1) Gaussian blur-sharp, 2) motion blur-sharp, and 3) sharp-sharp, to construct the text-specific hybrid dictionary. We also propose a new optimization framework suited to the task of text image deblurring. Extensive experiments on a challenging dataset of synthetic and real-world text images demonstrate that the proposed method outperforms state-of-the-art image deblurring methods both quantitatively and qualitatively.
28
High-resolution dynamic inversion imaging with motion-aberrations-free using optical flow learning networks. Sci Rep 2019; 9:11319. PMID: 31383880; PMCID: PMC6683134; DOI: 10.1038/s41598-019-47564-z
Abstract
Dynamic optical imaging (e.g., time delay integration imaging) is troubled by motion blur, which fundamentally arises from mismatch between photo-induced charge transfer and optical image movement. Motion aberrations in the forward dynamic imaging link impede the acquisition of high-quality images. Here, we propose a high-resolution dynamic inversion imaging method based on optical flow neural learning networks. Optical flow is reconstructed via a multilayer neural learning network and used to construct the motion spread function, which enables computational reconstruction of captured images with a single digital filter. This work constructs the complete dynamic imaging link, involving both the backward and the forward imaging link, and demonstrates the capability of backward imaging by reducing motion aberrations.
29
Hosseini MS, Plataniotis KN. Convolutional Deblurring for Natural Imaging. IEEE Trans Image Process 2019; 29:250-264. PMID: 31380758; DOI: 10.1109/tip.2019.2929865
Abstract
In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback of many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before the image is stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of finite impulse response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the point spread function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for the two Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.
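The paper's one-shot filter is synthesized from FIR even-derivative filters; as a frequency-domain point of comparison for "boosting the frequency fall-off" of a PSF, a minimal non-blind Wiener-style inverse filter can be sketched as follows (this assumes the PSF and an SNR constant are known, which the paper's blind PSF-statistics estimation avoids; names and values are illustrative):

```python
import numpy as np

def wiener_deconv(blurred, psf, snr=1e3):
    """Frequency-domain Wiener deconvolution with a known, origin-anchored PSF.

    G = H* / (|H|^2 + 1/snr) approximates the inverse filter 1/H while
    damping frequencies where the PSF response |H| has fallen off, so the
    restoration does not amplify noise at those frequencies.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```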
30
Shao WZ, Bao BK, Li HB. Enhancing Blurred Low-Resolution Images via Exploring the Potentials of Learning-Based Super-Resolution. Int J Pattern Recogn 2019. DOI: 10.1142/s021800141940007x
Abstract
This paper aims to propose a candidate solution to the challenging task of single-image blind super-resolution (SR), via extensively exploring the potentials of learning-based SR schemes in the literature. The task is formulated as an energy functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The functional includes a so-called convolutional consistency term, which incorporates a non-blind learning-based SR result to better guide the kernel estimation process, and a bi-l0-l2-norm regularization imposed on both the super-resolved sharp image and the nonparametric blur-kernel. A numerical algorithm is deduced by coupling the splitting augmented Lagrangian (SAL) and conjugate gradient (CG) methods. With the estimated blur-kernel, the final SR image is reconstructed using a simple TV-based non-blind SR method. The proposed blind SR approach is demonstrated to achieve better performance than [T. Michaeli and M. Irani, Nonparametric Blind Super-resolution, in Proc. IEEE Conf. Comput. Vision (IEEE Press, Washington, 2013), pp. 945-952] in terms of both blur-kernel estimation accuracy and image enhancement quality. Meanwhile, the experimental results demonstrate, surprisingly, that the local linear regression-based SR method anchored neighbor regression (ANR) serves the proposed functional more appropriately than methods harnessing deep convolutional neural networks.
Affiliation(s)
- Wen-Ze Shao
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, P. R. China
- National Engineering Research Center of Communications and Networking, Nanjing University of Posts and Telecommunications, Nanjing, P. R. China
- Bing-Kun Bao
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, P. R. China
- Hai-Bo Li
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, P. R. China
- School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
31
Chung J, Martinez GW, Lencioni KC, Sadda SR, Yang C. Computational aberration compensation by coded-aperture-based correction of aberration obtained from optical Fourier coding and blur estimation. Optica 2019; 6:647-661. PMID: 33134437; PMCID: PMC7597901; DOI: 10.1364/optica.6.000647
Abstract
We report a novel generalized optical measurement system and computational approach to determine and correct aberrations in optical systems. The system consists of a computational imaging method capable of reconstructing an optical system's pupil function by adapting overlapped Fourier coding to an incoherent imaging modality. It recovers the high-resolution image latent in an aberrated image via deconvolution, which is made robust to noise by using coded apertures to capture images. We term this method coded-aperture-based correction of aberration obtained from overlapped Fourier coding and blur estimation (CACAO-FB). It is well suited to various imaging scenarios where aberration is present and where providing spatially coherent illumination is very challenging or impossible. We demonstrate CACAO-FB with a variety of samples, including an in vivo imaging experiment on the eye of a rhesus macaque to correct for the inherent aberration in the rendered retinal images. CACAO-FB ultimately allows an aberrated imaging system to achieve diffraction-limited performance over a wide field of view by shifting optical design complexity to computational algorithms in post-processing.
Affiliation(s)
- Jaebum Chung (corresponding author)
- Department of Electrical Engineering, California Institute of Technology, Pasadena, California 91125, USA
- Gloria W. Martinez
- Office of Laboratory Animal Resources, California Institute of Technology, Pasadena, California 91125, USA
- Karen C. Lencioni
- Office of Laboratory Animal Resources, California Institute of Technology, Pasadena, California 91125, USA
- Srinivas R. Sadda
- Doheny Eye Institute, University of California-Los Angeles, Los Angeles, California 90033, USA
- Changhuei Yang
- Department of Electrical Engineering, California Institute of Technology, Pasadena, California 91125, USA
32
Park B, Lee H, Jeon S, Ahn J, Kim HH, Kim C. Reflection-mode switchable subwavelength Bessel-beam and Gaussian-beam photoacoustic microscopy in vivo. J Biophotonics 2019; 12:e201800215. PMID: 30084200; DOI: 10.1002/jbio.201800215
Abstract
We have developed a reflection-mode switchable subwavelength Bessel-beam (BB) and Gaussian-beam (GB) photoacoustic microscopy (PAM) system. To achieve both reflection-mode operation and high resolution, we tightly attached a very small ultrasound transducer to an optical objective lens with a numerical aperture of 1.0 and a working distance of 2.5 mm. We used an axicon and an achromatic doublet in our system to obtain the extended depth of field (DOF) of the BB. To compare the DOF performance of our BB-PAM system against the GB-PAM system, we designed the system so that the GB can be generated by simply removing these lenses. Using a 532 nm pulse laser, we achieved lateral resolutions of 300 and 270 nm for BB-PAM and GB-PAM, respectively. The measured DOF of BB-PAM was approximately 229 μm, about 7× better than that of GB-PAM. We imaged the vasculature of a mouse ear using BB-PAM and GB-PAM and confirmed that the DOF of BB-PAM is much greater. We believe that the high resolution achieved at the extended DOF makes our system practical for a wide range of biomedical research, including red blood cell (RBC) migration in blood vessels at various depths and observation of cell migration or cell culture.
Affiliation(s)
- Byullee Park
- Department of Creative IT Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Hoyong Lee
- Department of Creative IT Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Seungwan Jeon
- Department of Creative IT Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Joongho Ahn
- Department of Creative IT Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Hyung H Kim
- Department of Creative IT Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
- Chulhong Kim
- Department of Creative IT Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
33
34
Bai Y, Cheung G, Liu X, Gao W. Graph-Based Blind Image Deblurring From a Single Photograph. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 28:1404-1418. [PMID: 30307861 DOI: 10.1109/tip.2018.2874290] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Blind image deblurring, i.e., deblurring without knowledge of the blur kernel, is a highly ill-posed problem. The problem can be solved in two parts: i) estimate a blur kernel from the blurry image, and ii) given an estimated blur kernel, de-convolve the blurry input to restore the target image. In this paper, we propose a graph-based blind image deblurring algorithm by interpreting an image patch as a signal on a weighted graph. Specifically, we first argue that a skeleton image-a proxy that retains the strong gradients of the target but smooths out the details-can be used to accurately estimate the blur kernel and has a unique bi-modal edge weight distribution. Then, we design a reweighted graph total variation (RGTV) prior that can efficiently promote a bi-modal edge weight distribution given a blurry patch. Further, to analyze RGTV in the graph frequency domain, we introduce a new weight function to represent RGTV as a graph l1-Laplacian regularizer. This leads to a graph spectral filtering interpretation of the prior with desirable properties, including robustness to noise and blur, strong piecewise smooth (PWS) filtering and sharpness promotion. Minimizing a blind image deblurring objective with RGTV results in a non-convex non-differentiable optimization problem. Leveraging the new graph spectral interpretation for RGTV, we design an efficient algorithm that solves for the skeleton image and the blur kernel alternately. Specifically for Gaussian blur, we propose a further speedup strategy for blind Gaussian deblurring using accelerated graph spectral filtering. Finally, with the computed blur kernel, recent non-blind image deblurring algorithms can be applied to restore the target image. Experimental results demonstrate that our algorithm successfully restores latent sharp images and outperforms state-of-the-art methods quantitatively and qualitatively.
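The RGTV prior described above can be illustrated with a minimal numerical sketch. The following hypothetical Python fragment (not the authors' code; `graph_tv`, `sigma`, and the 4-connected grid graph are illustrative assumptions) computes a reweighted graph total variation in which edge weights are a Gaussian function of the current intensity differences, so a sharp step scores lower than a blurry ramp, which is the property the prior exploits to favor skeleton-like images.

```python
import numpy as np

def graph_tv(patch, sigma=0.1):
    """Reweighted graph total variation of a patch on a 4-connected grid.

    Edge weights w_ij = exp(-(x_i - x_j)^2 / (2 sigma^2)) depend on the
    signal itself, hence "reweighted": sharp edges get near-zero weights
    (cheap to keep), while small blurry gradients are penalized.
    """
    x = patch.astype(float)
    # horizontal and vertical neighbor differences on the grid graph
    dh = x[:, 1:] - x[:, :-1]
    dv = x[1:, :] - x[:-1, :]
    wh = np.exp(-dh**2 / (2 * sigma**2))
    wv = np.exp(-dv**2 / (2 * sigma**2))
    # RGTV = sum over edges of w_ij * |x_i - x_j|
    return float((wh * np.abs(dh)).sum() + (wv * np.abs(dv)).sum())
```

Minimizing such a cost over latent patches therefore promotes the bi-modal edge-weight distribution the paper argues a skeleton image should have.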
35
Zachevsky I, Zeevi YY. Blind deblurring of natural stochastic textures using an anisotropic fractal model and phase retrieval algorithm. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 28:937-951. [PMID: 30296232 DOI: 10.1109/tip.2018.2874291] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The challenging inverse problem of blind deblurring has been investigated thoroughly for natural images. Existing algorithms exploit edge-type structures, or similarity to smaller patches within the image, to estimate the correct blurring kernel. However, these methods do not perform well enough on natural stochastic textures (NST), which are mostly random and in general are not characterized by distinct edges and contours. In NST, even small kernels cause severe image degradation, so restoration poses an outstanding challenge. In this work, we refine an existing method by implementing an anisotropic fractal model to estimate the blur kernel's power spectral density. The final kernel is then estimated via an adaptation of a phase retrieval algorithm originally proposed for sparse signals. We further incorporate additional constraints specific to blur filters to yield even better results. These results are compared with those obtained by recently published blind deblurring methods.
36
Pandey A, Gregory JW. Iterative Blind Deconvolution Algorithm for Deblurring a Single PSP/TSP Image of Rotating Surfaces. SENSORS 2018; 18:s18093075. [PMID: 30217038 PMCID: PMC6163952 DOI: 10.3390/s18093075] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 09/04/2018] [Accepted: 09/10/2018] [Indexed: 11/16/2022]
Abstract
Imaging of pressure-sensitive paint (PSP) for pressure measurement on moving surfaces is problematic due to the movement of the object within the finite exposure time of the imager, resulting in the blurring of the blade edges. The blurring problem is particularly challenging when high-sensitivity PSP with a long lifetime is used, where the long luminescence time constant of exponential light decay following a burst of excitation light energy results in blurred images. One method to ameliorate this effect is image deconvolution using a point spread function (PSF) based on an estimation of the luminescent time constant. Prior implementations of image deconvolution for PSP deblurring have relied upon a spatially invariant time constant in order to reduce computational time. However, the use of an assumed value of the time constant leads to errors in the point spread function, particularly when strong pressure gradients (which cause strong spatial gradients in the decay time constant) are involved. This work introduces an iterative method of image deconvolution in which a spatially variant PSF is used. The point-by-point PSF values are found in an iterative manner, since the time constant depends on the local pressure value, which can only be found from the reduced PSP data. The scheme estimates a super-resolved spatially varying blur kernel with sub-pixel resolution without filtering the blurred image, and then restores the image using classical iterative regularization tools. A kernel-free forward model has been used to generate test images with known pressure surface maps and a varying amount of noise to evaluate the applicability of this scheme in different experimental conditions. A spinning disk setup with a grazing nitrogen jet for producing strong pressure gradients has also been used to evaluate the scheme on a real-world problem. Results, including the convergence history and the effect of the regularization iteration count, are shown, along with a comparison with the previous PSP deblurring method.
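As a simplified illustration of the iterative blind deconvolution idea, the toy 1D Python sketch below (hypothetical code, not the paper's implementation, and using a spatially invariant kernel rather than the paper's spatially varying PSF) alternates multiplicative Richardson-Lucy updates for the latent signal and for the kernel, renormalizing the kernel each round since a PSF conserves energy.

```python
import numpy as np

def rl_blind(blurred, ksize=5, n_outer=10, n_inner=5, eps=1e-8):
    """Toy 1D alternating (blind) Richardson-Lucy deconvolution.

    Alternates multiplicative RL updates for the latent signal and for
    the kernel; the kernel is clipped to be non-negative and normalized
    to unit sum each round. ksize must be odd; blurred must be >= 0.
    """
    conv = lambda a, b: np.convolve(a, b, mode="same")
    x = blurred.astype(float).copy()        # latent-signal estimate
    k = np.full(ksize, 1.0 / ksize)         # flat initial kernel guess
    c_off = ksize // 2
    for _ in range(n_outer):
        for _ in range(n_inner):            # latent-signal update
            ratio = blurred / (conv(x, k) + eps)
            x *= conv(ratio, k[::-1])
        for _ in range(n_inner):            # kernel update on its support
            ratio = blurred / (conv(x, k) + eps)
            corr = np.convolve(ratio, x[::-1], mode="same")
            c = len(corr) // 2
            k *= corr[c - c_off:c + c_off + 1]
            k = np.clip(k, 0.0, None)
            k /= k.sum() + eps
    return x, k
```

The iterative PSF refinement in the paper follows the same alternating spirit, but couples each local PSF to the locally reduced pressure value.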
Affiliation(s)
- Anshuman Pandey
- Aerospace Research Center, The Ohio State University, 2300 West Case Road, Columbus, OH 43235, USA.
- James W Gregory
- Aerospace Research Center, The Ohio State University, 2300 West Case Road, Columbus, OH 43235, USA.
37
Boorboor S, Jadhav S, Ananth M, Talmage D, Role LW, Kaufman A. Visualization of Neuronal Structures in Wide-Field Microscopy Brain Images. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 25:10.1109/TVCG.2018.2864852. [PMID: 30136950 PMCID: PMC6382602 DOI: 10.1109/tvcg.2018.2864852] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Wide-field microscopes are commonly used in neurobiology for experimental studies of brain samples. Available visualization tools are limited to electron, two-photon, and confocal microscopy datasets, and current volume rendering techniques do not yield effective results when used with wide-field data. We present a workflow for the visualization of neuronal structures in wide-field microscopy images of brain samples. We introduce a novel gradient-based distance transform that overcomes the out-of-focus blur caused by the inherent design of wide-field microscopes. This is followed by the extraction of the 3D structure of neurites using a multi-scale curvilinear filter and cell-bodies using a Hessian-based enhancement filter. The response from these filters is then applied as an opacity map to the raw data. Based on the visualization challenges faced by domain experts, our workflow provides multiple rendering modes to enable qualitative analysis of neuronal structures, which includes separation of cell-bodies from neurites and an intensity-based classification of the structures. Additionally, we evaluate our visualization results against both a standard image processing deconvolution technique and a confocal microscopy image of the same specimen. We show that our method is significantly faster and requires less computational resources, while producing high quality visualizations. We deploy our workflow in an immersive gigapixel facility as a paradigm for the processing and visualization of large, high-resolution, wide-field microscopy brain datasets.
38
Li J, Gong W, Li W. Combining Motion Compensation with Spatiotemporal Constraint for Video Deblurring. SENSORS 2018; 18:s18061774. [PMID: 29865162 PMCID: PMC6022012 DOI: 10.3390/s18061774] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2018] [Revised: 04/27/2018] [Accepted: 05/25/2018] [Indexed: 11/16/2022]
Abstract
We propose a video deblurring method that combines motion compensation with a spatiotemporal constraint to restore blurry video caused by camera shake. The proposed method makes full use of the spatiotemporal information, not only in blur kernel estimation but also in latent sharp frame restoration. Firstly, we estimate a motion vector between the current and the previous blurred frames, and use it to derive a motion-compensated frame from the previous restored frame. Secondly, we propose a blur kernel estimation strategy that applies the derived motion-compensated frame to an improved regularization model, improving the quality of the estimated blur kernel and reducing the processing time. Thirdly, we propose a spatiotemporal constraint algorithm that not only enhances temporal consistency, but also suppresses noise and ringing artifacts in the deblurred video by introducing a temporal regularization term. Finally, we extend Fast Total Variation deconvolution (FTVd) to solve the minimization problem of the proposed spatiotemporal constraint energy function. Extensive experiments demonstrate that the proposed method achieves state-of-the-art results in both subjective and objective evaluations.
Affiliation(s)
- Jing Li
- Key Lab of Optoelectronic Technology & Systems of Education Ministry, Chongqing University, Chongqing 400044, China.
- Weiguo Gong
- Key Lab of Optoelectronic Technology & Systems of Education Ministry, Chongqing University, Chongqing 400044, China.
- Weihong Li
- Key Lab of Optoelectronic Technology & Systems of Education Ministry, Chongqing University, Chongqing 400044, China.
39
Chandramouli P, Jin M, Perrone D, Favaro P. Plenoptic Image Motion Deblurring. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:1723-1734. [PMID: 29346091 DOI: 10.1109/tip.2017.2775062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
40
Mosleh A, Sola YE, Zargari F, Onzon E, Langlois JMP. Explicit Ringing Removal in Image Deblurring. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:580-593. [PMID: 29136610 DOI: 10.1109/tip.2017.2764625] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this paper, we present a simple yet effective image deblurring method that produces ringing-free deblurred images. Our work is inspired by the observation that large-scale deblurring ringing artifacts are measurable through a multi-resolution pyramid of low-pass filterings of the blurred-deblurred image pair. We propose to model such a quantification as a convex cost function and minimize it directly in the deblurring process in order to reduce ringing regardless of its cause. An efficient primal-dual algorithm is proposed as a solution to this optimization problem. Since the regularization is biased toward ringing patterns, the details of the reconstructed image are prevented from over-smoothing. An inevitable source of ringing is sensor saturation, which, unlike most other sources of ringing, can be detected at no cost. However, dealing with the saturation effect in deblurring introduces a non-linear operator into the optimization problem. In this paper, we also introduce a linear approximation for handling saturation in the proposed deblurring method. As a result of these steps, we significantly enhance the quality of the deblurred images. Experimental results and quantitative evaluations demonstrate that the proposed method performs favorably against state-of-the-art image deblurring methods.
41
Aittala M, Durand F. Burst Image Deblurring Using Permutation Invariant Convolutional Neural Networks. COMPUTER VISION – ECCV 2018 2018. [DOI: 10.1007/978-3-030-01237-3_45] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
42
Shen K, Lu H, Baig S, Wang MR. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging. BIOMEDICAL OPTICS EXPRESS 2017; 8:4887-4918. [PMID: 29188089 PMCID: PMC5695939 DOI: 10.1364/boe.8.004887] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2017] [Revised: 09/18/2017] [Accepted: 09/18/2017] [Indexed: 05/23/2023]
Abstract
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing reconstructs a higher-resolution, higher-quality lateral image at each depth layer; layer-by-layer processing then yields an overall high lateral resolution and quality 3D image. In theory, superresolution processing that includes deconvolution can jointly address the diffraction limit, lateral scan density, and background noise problems. In experiment, by imaging a known resolution test target we confirmed a ~3× improvement in lateral resolution, reaching 7.81 µm and 2.19 µm using sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality. Improved lateral resolution on in vitro skin C-scan images has also been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint, and retina layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random, minor, unintended body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues.
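The core multi-frame idea (sub-pixel-shifted low-resolution samples interleaved onto a finer grid) can be sketched in a few lines. The toy 1D Python fragment below is a hypothetical illustration with known shifts, whereas the paper estimates shifts via multi-modal volume registration; `shift_and_add` and its parameters are illustrative names.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Toy 1D shift-and-add superresolution.

    frames : list of equally long low-res 1D signals
    shifts : known sub-pixel shift of each frame, in low-res pixels
    factor : upsampling factor of the reconstruction grid

    Each low-res sample is binned onto a fine grid at its true position;
    bins hit by several frames are averaged.
    """
    n = len(frames[0])
    hi = np.zeros(n * factor)
    hits = np.zeros(n * factor)
    for frame, s in zip(frames, shifts):
        pos = np.round((np.arange(n) + s) * factor).astype(int)
        pos = np.clip(pos, 0, n * factor - 1)
        np.add.at(hi, pos, frame)   # unbuffered accumulation per bin
        np.add.at(hits, pos, 1)
    return hi / np.maximum(hits, 1)
```

In practice this binning step is followed by deconvolution to counteract the system PSF, which is where the resolution gain beyond the scan density comes from.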
Affiliation(s)
- Kai Shen
- Department of Electrical and Computer Engineering, University of Miami, 1251 Memorial Drive, Coral Gables, FL 33146, USA
- Hui Lu
- Department of Electrical and Computer Engineering, University of Miami, 1251 Memorial Drive, Coral Gables, FL 33146, USA
- Sarfaraz Baig
- Department of Biomedical Engineering, University of Miami, 1251 Memorial Drive, Coral Gables, FL 33146, USA
- Michael R. Wang
- Department of Electrical and Computer Engineering, University of Miami, 1251 Memorial Drive, Coral Gables, FL 33146, USA
|
43
|
Kotera J, Smidl V, Sroubek F. Blind Deconvolution With Model Discrepancies. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:2533-2544. [PMID: 28278468 DOI: 10.1109/tip.2017.2676981] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Blind deconvolution is a strongly ill-posed problem comprising simultaneous blur and image estimation. Recent advances in prior modeling and/or inference methodology have led to methods that perform reasonably well in real cases. However, as we show here, they tend to fail if the convolution model is violated even in a small part of the image. Methods based on variational Bayesian inference play a prominent role. In this paper, we use this inference in combination with the same prior for noise, image, and blur, belonging to the family of independent non-identical Gaussian distributions known as the automatic relevance determination prior. We identify several important properties of this prior useful in blind deconvolution, namely, enforcing non-negativity of the blur kernel, favoring sharp images over blurred ones, and most importantly, handling non-Gaussian noise, which, as we demonstrate, is common in real scenarios. The presented method handles discrepancies in the convolution model, and thus extends the applicability of blind deconvolution to real scenarios, such as photos blurred by camera motion and incorrect focus.
44
Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring. SENSORS 2017; 17:s17010174. [PMID: 28106764 PMCID: PMC5298747 DOI: 10.3390/s17010174] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2016] [Revised: 01/04/2017] [Accepted: 01/04/2017] [Indexed: 11/29/2022]
Abstract
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of these hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring reduces to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by numerical methods based on the alternating direction method of multipliers (ADMM). Comprehensive experiments on both synthetic and realistic datasets have been carried out to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
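The ADMM machinery the paper relies on can be shown on the simplest sparsity-constrained problem. The hypothetical Python sketch below (not the paper's hybrid L1/L2 kernel regularizer; `admm_lasso` and its parameters are illustrative) splits a quadratic data term from an L1 penalty, so each iteration alternates a linear solve with the soft-thresholding proximal step that handles the non-smooth part.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by scaled-form ADMM."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    L = AtA + rho * np.eye(n)       # formed once, solved each iteration
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))   # quadratic step
        z = soft_threshold(x + u, lam / rho)          # sparsity step
        u += x - z                                    # dual update
    return z
```

In the paper, analogous splits isolate the L1 kernel term, the squared-L2 smoothness term, and the TGV regularizer into subproblems of exactly this flavor.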
45
Perrone D, Favaro P. A Clearer Picture of Total Variation Blind Deconvolution. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2016; 38:1041-1055. [PMID: 26372205 DOI: 10.1109/tpami.2015.2477819] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort on understanding the basic mechanisms to solve blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches worked. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the top performing algorithms.
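The alternating minimization analyzed above can be sketched compactly. The toy 1D Python fragment below is a hypothetical illustration (not the authors' implementation): one gradient step on the latent signal with a smoothed-TV penalty, then one step on the kernel followed by projection onto the constraints the analysis relies on (non-negativity and unit sum).

```python
import numpy as np

def tv_blind_step(y, x, k, lam=0.01, lr=0.1):
    """One alternating step of total-variation blind deconvolution (1D toy).

    y : observed blurry signal; x : latent estimate; k : kernel estimate
    (odd length). Returns updated (x, k).
    """
    conv = lambda a, b: np.convolve(a, b, mode="same")
    # --- latent-signal step: data term + smoothed-TV gradient ---
    r = conv(x, k) - y
    d = np.diff(x)
    w = d / np.sqrt(d**2 + 1e-6)          # d/dx of sum sqrt(d^2 + eps)
    g_tv = np.zeros_like(x)
    g_tv[:-1] -= w
    g_tv[1:] += w
    x = x - lr * (conv(r, k[::-1]) + lam * g_tv)
    # --- kernel step + projection onto {k >= 0, sum k = 1} ---
    r = conv(x, k) - y
    corr = np.convolve(r, x[::-1], mode="same")
    c = len(corr) // 2
    gk = corr[c - len(k) // 2:c + len(k) // 2 + 1]
    k = np.clip(k - lr * gk, 0.0, None)
    s = k.sum()
    k = k / s if s > 0 else np.full_like(k, 1.0 / k.size)
    return x, k
```

The paper's point is that with a TV prior and these simplex-like kernel constraints, such an alternating scheme converges to useful solutions despite the theoretical objections raised against it.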
46
Wang YF, Kilpatrick J, Jarvis S, Boland F, Kokaram A, Corrigan D. Double-Tip Artefact Removal from Atomic Force Microscopy Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:2774-2788. [PMID: 26915122 DOI: 10.1109/tip.2016.2532239] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The Atomic Force Microscope (AFM) allows the measurement of interactions at interfaces with nanoscale resolution. Imperfections in the shape of the tip often lead to the presence of imaging artefacts such as the blurring and repetition of objects within images. Generally, these artefacts can only be avoided by discarding data and replacing the probe. Under certain circumstances (e.g., rare, high value samples, or extensive chemical/physical tip modification) such an approach is not feasible. Here, we apply a novel deblurring technique, using a Bayesian framework, to yield a reliable estimate of the real surface topography without any prior knowledge of the tip geometry (blind reconstruction). A key contribution is to leverage the significant, recently successful body of work in natural image deblurring to solve this problem. We focus specifically on the 'double-tip' effect, where two asperities are present on the tip, each contributing to the image formation mechanism. Finally, we demonstrate that the proposed technique successfully removes the 'double-tip' effect from high resolution AFM images that exhibit this artefact, whilst preserving feature resolution.
47
Lu Q, Zhou W, Fang L, Li H. Robust Blur Kernel Estimation for License Plate Images From Fast Moving Vehicles. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:2311-2323. [PMID: 26955030 DOI: 10.1109/tip.2016.2535375] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
As the unique identification of a vehicle, the license plate is a key clue for identifying over-speed vehicles or those involved in hit-and-run accidents. However, the snapshot of an over-speed vehicle captured by a surveillance camera is frequently blurred due to fast motion, often to the point of being unrecognizable by humans. The observed plate images are usually of low resolution and suffer severe loss of edge information, which poses a great challenge to existing blind deblurring methods. For license plate image blurring caused by fast motion, the blur kernel can be viewed as a linear uniform convolution and parametrically modeled by its angle and length. In this paper, we propose a novel scheme based on sparse representation to identify the blur kernel. By analyzing the sparse representation coefficients of the recovered image, we determine the angle of the kernel, based on the observation that the recovered image has the sparsest representation when the kernel angle corresponds to the genuine motion angle. Then, we estimate the length of the motion kernel with a Radon transform in the Fourier domain. Our scheme can handle large motion blur even when the license plate is unrecognizable by humans. We evaluate our approach on real-world images and compare it with several popular state-of-the-art blind image deblurring algorithms. Experimental results demonstrate the superiority of the proposed approach in terms of effectiveness and robustness.
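The angle/length parameterization of a linear uniform motion blur, which the abstract's kernel model assumes, is easy to make concrete. The hypothetical Python sketch below (`motion_kernel` is an illustrative name, not the authors' code) rasterizes a centered line segment of the given length and angle and normalizes it to unit sum; searching over this two-parameter family is what replaces generic kernel estimation in such schemes.

```python
import numpy as np

def motion_kernel(length, angle_deg, size):
    """Linear uniform motion-blur kernel parameterized by length and angle.

    Draws a centered line segment of the given length (in pixels) at the
    given angle on a size x size grid, then normalizes to unit sum.
    """
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # sample points along the segment densely enough to hit each pixel
    t = np.linspace(-length / 2, length / 2, 8 * int(length) + 1)
    rows = np.round(c - t * np.sin(theta)).astype(int)
    cols = np.round(c + t * np.cos(theta)).astype(int)
    ok = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    k[rows[ok], cols[ok]] = 1.0
    return k / k.sum()
```

Convolving a sharp plate image with such a kernel reproduces the fast-motion blur model; the paper's contribution is recovering the two parameters from the blurred image alone.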
48
D'Andres L, Salvador J, Kochale A, Susstrunk S. Non-Parametric Blur Map Regression for Depth of Field Extension. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:1660-1673. [PMID: 26886992 DOI: 10.1109/tip.2016.2526907] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Real camera systems have a limited depth of field (DOF) which may cause an image to be degraded due to visible misfocus or too shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced regression tree fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel by the scale of a defocus point spread function. A non-blind spatially varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.
49
Abstract
Transcription factors (TFs) play a central role in regulating gene expression in all bacteria. Yet until recently, studies of TF binding were limited to a small number of factors at a few genomic locations. Chromatin immunoprecipitation followed by sequencing (ChIP-Seq) provides the ability to map binding sites globally for TFs, and the scalability of the technology enables the ability to map binding sites for every DNA binding protein in a prokaryotic organism. We have developed a protocol for ChIP-Seq tailored for use with mycobacteria and an analysis pipeline for processing the resulting data. The protocol and pipeline have been used to map over 100 TFs from Mycobacterium tuberculosis, as well as numerous TFs from related mycobacteria and other bacteria. The resulting data provide evidence that the long-accepted spatial relationship between TF binding site, promoter motif, and the corresponding regulated gene may be too simple a paradigm, failing to adequately capture the variety of TF binding sites found in prokaryotes. In this article we describe the protocol and analysis pipeline, the validation of these methods, and the results of applying these methods to M. tuberculosis.
50
Tian D, Tao D. Coupled Learning for Facial Deblur. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:961-972. [PMID: 26685244 DOI: 10.1109/tip.2015.2509418] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Blur in facial images significantly impedes the efficiency of recognition approaches. However, most existing blind deconvolution methods cannot generate satisfactory results due to their dependence on strong edges, which are sufficient in natural images but not in facial images. In this paper, we represent point spread functions (PSFs) by the linear combination of a set of pre-defined orthogonal PSFs, and similarly, an estimated intrinsic (EI) sharp face image is represented by the linear combination of a set of pre-defined orthogonal face images. In doing so, PSF and EI estimation is simplified to discovering two sets of linear combination coefficients, which are simultaneously found by our proposed coupled learning algorithm. To make our method robust to different types of blurry face images, we generate several candidate PSFs and EIs for a test image, and then, a non-blind deconvolution method is adopted to generate more EIs by those candidate PSFs. Finally, we deploy a blind image quality assessment metric to automatically select the optimal EI. Thorough experiments on the facial recognition technology database, extended Yale face database B, CMU pose, illumination, and expression (PIE) database, and face recognition grand challenge database version 2.0 demonstrate that the proposed approach effectively restores intrinsic sharp face images and, consequently, improves the performance of face recognition.
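The key representational trick above, expressing a PSF (or face image) as a linear combination of a pre-defined orthogonal set, reduces estimation to finding coefficients. The minimal Python sketch below (hypothetical names, not the authors' coupled learning algorithm) shows why: with orthonormal basis rows, the least-squares coefficients are plain dot products.

```python
import numpy as np

def psf_coefficients(psf, basis):
    """Coefficients of a PSF in a pre-defined orthonormal PSF basis.

    basis : (n_basis, n_pixels) array with orthonormal rows, so the
    least-squares linear-combination coefficients reduce to dot products.
    """
    return basis @ psf.ravel()

def reconstruct(coeffs, basis, shape):
    """Rebuild the PSF (or image) from its basis coefficients."""
    return (coeffs @ basis).reshape(shape)
```

The paper's coupled learning then estimates two such coefficient vectors jointly, one for the PSF basis and one for the face-image basis, rather than solving each projection independently.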