101. Liu B, Xu Z, Wang Q, Niu X, Chan WX, Hadi W, Yap CH. A denoising and enhancing method framework for 4D ultrasound images of human fetal heart. Quant Imaging Med Surg 2021; 11:1567-1585. [PMID: 33816192] [PMCID: PMC7930683] [DOI: 10.21037/qims-20-818]
Abstract
BACKGROUND: 4D ultrasound images of the human fetal heart are important for medical applications such as evaluating fetal heart function and the early diagnosis of congenital heart diseases. However, fetal ultrasound images are characterized by high noise and low contrast, so denoising and enhancement are important.
METHODS: In this paper, a method framework for denoising and enhancement is proposed. It consists of a 4D-NLM (non-local means) denoising method for 4D fetal heart ultrasound image sequences, which exploits contextually similar information in neighboring images to denoise the target image, and an enhancement method called Adaptive Clipping for Each Histogram Pillar (ACEHP), which is designed to enhance myocardial regions so that they can be distinguished from blood spaces.
RESULTS: Denoising and enhancement experiments show that the 4D-NLM method denoises better than several classical and state-of-the-art methods such as NLM and WNNM. Similarly, the ACEHP method keeps the noise level low while enhancing myocardial regions better than several classical and state-of-the-art methods such as CLAHE and SVDDWT. Furthermore, in the volume rendering after the combined "4D-NLM+ACEHP" processing, the cardiac lumen is clear and its boundary is sharp. The framework (4D-NLM+ACEHP) achieves an entropy value of 4.84.
CONCLUSIONS: Our new framework can thus provide important improvements to clinical fetal heart ultrasound images.
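The core of 4D-NLM, as described above, is letting temporally neighboring frames contribute candidate patches to the non-local weighted average. Below is a minimal numpy sketch of that idea; it illustrates cross-frame non-local means only, not the authors' exact 4D-NLM (the patch size, search window, and filtering strength h are assumed parameters).

```python
import numpy as np

def nlm_across_frames(frames, t, patch=3, search=5, h=0.1):
    """Denoise frames[t] by non-local means, searching for similar
    patches in the target frame and its temporal neighbors.
    frames: (T, H, W) float array in [0, 1]; h: filtering strength."""
    T, H, W = frames.shape
    r, s = patch // 2, search // 2
    pad = r + s
    # Reflect-pad every frame so patches and search windows stay in bounds.
    padded = np.pad(frames, ((0, 0), (pad, pad), (pad, pad)), mode="reflect")
    out = np.zeros((H, W))
    neighbors = [k for k in (t - 1, t, t + 1) if 0 <= k < T]
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[t, ci - r:ci + r + 1, cj - r:cj + r + 1]
            num, den = 0.0, 0.0
            for k in neighbors:                 # temporal neighbors supply patches
                for di in range(-s, s + 1):     # spatial search window
                    for dj in range(-s, s + 1):
                        qi, qj = ci + di, cj + dj
                        cand = padded[k, qi - r:qi + r + 1, qj - r:qj + r + 1]
                        d2 = np.mean((ref - cand) ** 2)
                        w = np.exp(-d2 / (h * h))
                        num += w * padded[k, qi, qj]
                        den += w
            out[i, j] = num / den
    return out
```

This brute-force form costs O(H·W·frames·search²·patch²) and is for illustration; practical NLM implementations accelerate the patch-distance computation.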
Affiliation(s)
- Bin Liu: International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian, China; Key Lab of Ubiquitous Network and Service Software of Liaoning Province, Dalian University of Technology, Dalian, China; DUT-RU Co-Research Center of Advanced ICT for Active Life, Dalian University of Technology, Dalian, China
- Zhao Xu: International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian, China
- Qifeng Wang: International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian, China
- Xiaolei Niu: International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian, China
- Wei Xuan Chan: Department of Biomedical Engineering, National University of Singapore, Singapore
- Wiputra Hadi: Department of Biomedical Engineering, National University of Singapore, Singapore
- Choon Hwai Yap: Department of Biomedical Engineering, National University of Singapore, Singapore; Department of Bioengineering, Imperial College London, UK
102. Han D, Tang Q, Zhang Z, Yuan L, Rakovitis N, Li D, Li J. An Efficient Augmented Lagrange Multiplier Method for Steelmaking and Continuous Casting Production Scheduling. Chem Eng Res Des 2021. [DOI: 10.1016/j.cherd.2021.01.035]
103. Applications of machine vision in pharmaceutical technology: A review. Eur J Pharm Sci 2021; 159:105717. [DOI: 10.1016/j.ejps.2021.105717]
104. Yang W, Wang S, Fang Y, Wang Y, Liu J. Band Representation-Based Semi-Supervised Low-Light Image Enhancement: Bridging the Gap Between Signal Fidelity and Perceptual Quality. IEEE Trans Image Process 2021; 30:3461-3473. [PMID: 33656992] [DOI: 10.1109/tip.2021.3062184]
Abstract
It is widely acknowledged that under-exposure causes a variety of visual quality degradations: intensive noise, decreased visibility, biased color, etc. To alleviate these issues, a novel semi-supervised learning approach is proposed in this paper for low-light image enhancement. More specifically, we propose a deep recursive band network (DRBN) to recover a linear band representation of an enhanced normal-light image under the guidance of paired low/normal-light images. This design philosophy enables the network to generate a quality-improved image by reconstructing the given bands via another learnable linear transformation that is perceptually driven by an image quality assessment neural network. On one hand, the network is designed to obtain a variety of coarse-to-fine band representations whose estimates mutually benefit each other in a recursive process. On the other hand, the band representation of the enhanced image extracted in the recursive band learning stage of DRBN bridges the gap between the restoration knowledge of paired data and the perceptual preference for high-quality images. Subsequently, the band recomposition learns to recompose the band representation to fit the perceptual regularization of high-quality images under perceptual guidance. The proposed architecture can be flexibly trained with both paired and unpaired data. Extensive experiments demonstrate that our method produces better enhanced results with visually pleasing contrast and color distributions, as well as well-restored structural details.
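DRBN's bands are learned by the network, but the notion of a coarse-to-fine band representation recomposed by a learnable linear transformation can be pictured with a fixed Laplacian pyramid. The sketch below is that analogy only, not the paper's architecture; the gains stand in for the learned recomposition weights.

```python
import cv2
import numpy as np

def to_bands(img, levels=3):
    """Decompose an image into coarse-to-fine bands (Laplacian pyramid)."""
    bands, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        bands.append(cur - up)   # detail band at this scale
        cur = down
    bands.append(cur)            # coarsest (base) band
    return bands

def recompose(bands, gains):
    """Linearly recompose bands; the per-band gains play the role of the
    learnable linear transformation described in the abstract."""
    cur = bands[-1] * gains[-1]
    for band, g in zip(reversed(bands[:-1]), reversed(gains[:-1])):
        cur = cv2.pyrUp(cur, dstsize=(band.shape[1], band.shape[0])) + g * band
    return np.clip(cur, 0, 255).astype(np.uint8)
```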
105. Rahim A, Maqbool A, Rana T. Monitoring social distancing under various low light conditions with deep learning and a single motionless time of flight camera. PLoS One 2021; 16:e0247440. [PMID: 33630951] [PMCID: PMC7906321] [DOI: 10.1371/journal.pone.0247440]
Abstract
The purpose of this work is to provide an effective social distance monitoring solution in low light environments in a pandemic situation. The raging coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, has brought a global crisis with its deadly spread all over the world. In the absence of an effective treatment and vaccine, efforts to control this pandemic rely strictly on personal preventive actions, e.g., handwashing, face mask usage, environmental cleaning, and most importantly on social distancing, which is the only expedient approach to cope with this situation. Low light environments can contribute to the spread of disease because of nighttime gatherings, especially in summer when temperatures peak, and particularly in cities where homes are congested and lack proper cross-ventilation, so that people go outside with their families at night for fresh air. In such a situation, it is necessary to take effective measures to monitor the safety distance criteria, avoid more positive cases, and control the death toll. In this paper, a deep learning-based solution is proposed for the above-stated problem. The proposed framework utilizes the you only look once v4 (YOLO v4) model for real-time object detection, and a social distance measuring approach is introduced using a single motionless time of flight (ToF) camera. The risk factor is indicated based on the calculated distance, and safety distance violations are highlighted. Experimental results show that the proposed model exhibits good performance, with a 97.84% mean average precision (mAP) score and a mean absolute error (MAE) of 1.01 cm between actual and measured social distance values.
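The detector aside, the geometric step is straightforward: back-project each detected person's box center through the camera intrinsics using the ToF range, then threshold pairwise 3D distances. The sketch below assumes pinhole intrinsics (fx, fy, cx, cy) and a depth map aligned with the detections; it is an illustration, not the paper's exact measurement procedure.

```python
import numpy as np
from itertools import combinations

def flag_violations(boxes, depth, fx, fy, cx, cy, min_dist_m=2.0):
    """boxes: list of (x1, y1, x2, y2) person detections in pixels.
    depth: (H, W) ToF depth map in meters. fx, fy, cx, cy: intrinsics.
    Returns index pairs whose estimated 3D distance < min_dist_m."""
    points = []
    for (x1, y1, x2, y2) in boxes:
        u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # box center (pixels)
        z = float(depth[int(v), int(u)])           # ToF range at the center
        # Pinhole back-projection of the pixel into camera coordinates.
        points.append(np.array([(u - cx) * z / fx, (v - cy) * z / fy, z]))
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if np.linalg.norm(points[i] - points[j]) < min_dist_m]
```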
Affiliation(s)
- Adina Rahim: Department of Computer Software Engineering, NUST, Islamabad, Pakistan
- Ayesha Maqbool: Department of Computer Software Engineering, NUST, Islamabad, Pakistan
- Tauseef Rana: Department of Computer Software Engineering, NUST, Islamabad, Pakistan
106.
107. Jiang Y, Gong X, Liu D, Cheng Y, Fang C, Shen X, Yang J, Zhou P, Wang Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans Image Process 2021; 30:2340-2349. [PMID: 33481709] [DOI: 10.1109/tip.2021.3051462]
Abstract
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet generalizes very well to various real-world test images. Instead of supervising the learning with ground truth data, we regularize the unpaired training using information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and an attention mechanism. Extensive experiments show that our approach outperforms recent methods on a variety of visual quality metrics and in a subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is easily adaptable to enhancing real-world images from various domains. Our code and pre-trained models are available at: https://github.com/VITA-Group/EnlightenGAN.
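The "self-regularized perceptual loss" idea is that, with no ground truth available, the enhanced output is constrained to keep the VGG feature content of its own input. A minimal torch sketch follows; the choice of VGG-16 layer and the plain MSE distance are assumptions, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

class SelfPerceptualLoss(torch.nn.Module):
    """Distance between VGG features of the low-light input and the
    enhanced output, so training needs no ground-truth normal-light image."""
    def __init__(self, layer=16):  # truncate around relu3_3 (assumed choice)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.net = torch.nn.Sequential(*list(vgg[:layer + 1])).eval()
        for p in self.net.parameters():
            p.requires_grad_(False)   # VGG stays fixed

    def forward(self, enhanced, low_input):
        return F.mse_loss(self.net(enhanced), self.net(low_input))
```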
108. Yang W, Wang W, Huang H, Wang S, Liu J. Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement. IEEE Trans Image Process 2021; 30:2072-2086. [PMID: 33460379] [DOI: 10.1109/tip.2021.3050850]
Abstract
Due to the absence of a desirable objective for low-light image enhancement, previous data-driven methods may produce undesirable enhanced results, including amplified noise, degraded contrast, and biased colors. In this work, inspired by Retinex theory, we design an end-to-end signal prior-guided layer separation and data-driven mapping network with layer-specific constraints for single-image low-light enhancement. A Sparse Gradient Minimization sub-Network (SGM-Net) is constructed to remove low-amplitude structures and preserve major edge information, which facilitates extracting paired illumination maps of low/normal-light images. After the learned decomposition, two sub-networks (Enhance-Net and Restore-Net) predict the enhanced illumination and reflectance maps, respectively, which helps stretch the contrast of the illumination map and remove intensive noise from the reflectance map. All of the configured constraints, including the signal structure regularization and the losses, act together reciprocally, leading to good overall reconstruction quality. Evaluations on both synthetic and real images, particularly those containing intensive noise, compression artifacts, and their interleaved artifacts, show the effectiveness of our models, which significantly outperform the state-of-the-art methods.
109.
110. Attention Guided Retinex Architecture Search for Robust Low-light Image Enhancement. Artif Intell 2021. [DOI: 10.1007/978-3-030-93046-2_38]
111. Brightening the Low-Light Images via a Dual Guided Network. Artif Intell 2021. [DOI: 10.1007/978-3-030-93046-2_21]
112. Veluchamy M, Bhandari AK, Subramani B. Optimized Bezier Curve Based Intensity Mapping Scheme for Low Light Image Enhancement. IEEE Trans Emerg Top Comput Intell 2021. [DOI: 10.1109/tetci.2021.3053253]
113. Huang Z, Tang C, Xu M, Shen Y, Lei Z. Both speckle reduction and contrast enhancement for optical coherence tomography via sequential optimization in the logarithmic domain based on a refined Retinex model. Appl Opt 2020; 59:11087-11097. [PMID: 33361937] [DOI: 10.1364/ao.405981]
Abstract
Optical coherence tomography (OCT) image enhancement is a challenging task because speckle reduction and contrast enhancement need to be addressed simultaneously and effectively. We present a refined Retinex model, with a physical explanation, to guide the enhancement of OCT images corrupted by speckle noise. Based on this model, we establish two sequential optimization functions in the logarithmic domain, for speckle reduction and contrast enhancement, respectively. More specifically, we obtain the despeckled image of an entire OCT image by solving the first optimization function; as a byproduct, the speckle noise map can be recovered directly by subtracting the despeckled component. We then estimate the illumination and reflectance by solving the second optimization function. Further, we apply the contrast-limited adaptive histogram equalization (CLAHE) algorithm to adjust the illumination, and project it back onto the reflectance to achieve contrast enhancement. Experimental results demonstrate the robustness and effectiveness of the proposed method: it performs well in both speckle reduction and contrast enhancement, and it is superior to the two compared methods in both qualitative analysis and quantitative assessment. Our method has the practical potential to improve the accuracy of manual screening and computer-aided diagnosis of retinal diseases.
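The final adjustment step (CLAHE applied to a log-domain illumination estimate, then recombined with the reflectance) can be sketched as below. This is a crude stand-in that replaces the paper's two optimization solvers with a simple Gaussian-blur illumination estimate, so only the structure of the log-domain pipeline carries over.

```python
import cv2
import numpy as np

def enhance_log_domain(img_u8, sigma=15):
    """Stand-in for the pipeline above: estimate illumination by blurring
    in the log domain, equalize it with CLAHE, and recombine it with the
    (untouched) reflectance before exponentiating back."""
    log_img = np.log1p(img_u8.astype(np.float32))
    illum = cv2.GaussianBlur(log_img, (0, 0), sigma)   # smooth illumination
    reflect = log_img - illum                          # log reflectance
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    illum_u8 = cv2.normalize(illum, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    illum_eq = clahe.apply(illum_u8).astype(np.float32) / 255.0
    # Map the equalized illumination back to the log range, then invert log1p.
    lo, hi = illum.min(), illum.max()
    out = np.expm1(illum_eq * (hi - lo) + lo + reflect)
    return np.clip(out, 0, 255).astype(np.uint8)
```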
114.
115. Ni Z, Yang W, Wang S, Ma L, Kwong S. Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network. IEEE Trans Image Process 2020; 29:9140-9151. [PMID: 32960763] [DOI: 10.1109/tip.2020.3023615]
Abstract
Improving the aesthetic quality of images is challenging and eagerly sought by the public. To address this problem, most existing algorithms use supervised learning to train an automatic photo enhancer on paired data consisting of low-quality photos and corresponding expert-retouched versions. However, the style and characteristics of photos retouched by experts may not meet the needs or preferences of general users. In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns an image-to-image mapping from a set of images with desired characteristics in an unsupervised manner, rather than from a large number of paired images. The model is based on a single deep GAN that embeds modulation and attention mechanisms to capture richer global and local features. Based on this model, we introduce two losses for unsupervised image enhancement: (1) a fidelity loss, defined as an l2 regularization in the feature domain of a pre-trained VGG network, which keeps the content of the enhanced image the same as that of the input; and (2) a quality loss, formulated as a relativistic hinge adversarial loss, which endows the input image with the desired characteristics. Both quantitative and qualitative results show that the proposed model effectively improves the aesthetic quality of images.
116. Low-light enhancement based on an improved simplified Retinex model via fast illumination map refinement. Pattern Anal Appl 2020. [DOI: 10.1007/s10044-020-00908-2]
117. He R, Guo X, Shi Z. SIDE-A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images. Sensors 2020; 20:5300. [PMID: 32947978] [PMCID: PMC7570461] [DOI: 10.3390/s20185300]
Abstract
Single image dehazing is a difficult problem because of its ill-posed nature, and it has recently received increasing attention because of its high application potential in many visual tasks. Although single image dehazing methods have made remarkable progress in recent years, they are mainly designed for daytime haze removal. Nighttime dehazing is more challenging: most daytime dehazing methods become invalid due to multiple scattering phenomena and non-uniformly distributed, dim ambient illumination. The few approaches that have been proposed for nighttime image dehazing largely ignore low ambient light. In this paper, we propose a novel unified nighttime hazy image enhancement framework that addresses both haze removal and illumination enhancement simultaneously. Specifically, both the halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination of low-light hazy conditions are considered for the first time in our approach. More importantly, most current daytime dehazing methods can be effectively incorporated into the nighttime dehazing task through our framework. Firstly, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on Retinex theory. We then employ classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and state-of-the-art nighttime dehazing methods, both quantitative and qualitative experiments on real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement, and color preservation.
Affiliation(s)
- Renjie He (corresponding author): School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
- Xintao Guo: School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
- Zhongke Shi: School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
118. Zhao Y, Zhang J, Pereira E, Zheng Y, Su P, Xie J, Zhao Y, Shi Y, Qi H, Liu J, Liu Y. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE Trans Med Imaging 2020; 39:2725-2737. [PMID: 32078542] [DOI: 10.1109/tmi.2020.2974499]
Abstract
Precise characterization and analysis of corneal nerve fiber tortuosity are of great importance in facilitating the examination and diagnosis of many eye-related diseases. In this paper we propose a fully automated method for image-level tortuosity estimation, comprising image enhancement, exponential curvature estimation, and tortuosity level classification. The image enhancement component is based on an extended Retinex model, which not only corrects imbalanced illumination and improves image contrast, but also models noise explicitly to aid the removal of imaging noise. Afterwards, we take advantage of exponential curvature estimation in the 3D space of positions and orientations to measure curvature directly from the enhanced images, rather than relying on the explicit segmentation and skeletonization steps of a conventional pipeline, which usually accumulate pre-processing errors. The proposed method has been applied to two corneal nerve microscopy datasets to estimate a tortuosity level for each image. The experimental results show that it performs better than several selected state-of-the-art methods. Furthermore, we have manually graded the tortuosity level of 403 corneal nerve microscopy images, and this dataset has been released for public access to facilitate further research by the community on the same and related topics.
119. Detection of Neurological and Ophthalmological Pathologies with Optical Coherence Tomography Using Retinal Thickness Measurements: A Bibliometric Study. Appl Sci (Basel) 2020. [DOI: 10.3390/app10165477]
Abstract
We carry out a bibliometric analysis of neurological and ophthalmological pathologies based on retinal nerve fiber layer (RNFL) thickness measured with optical coherence tomography (OCT). Documents were selected from the Scopus database. We applied the most commonly used bibliometric indicators of both production and dispersion, such as Price's law of scientific literature growth, Lotka's law, the transience index, and the Bradford model. Finally, the participation index of the different countries and affiliations was calculated. Two hundred and forty-one documents from the period 2000-2019 were retrieved. Scientific production was better fitted by linear growth (r = 0.88) than by exponential growth (r = 0.87). The doubling time of the documents obtained was 5.6 years. The transience index was 89.62%, which indicates that most of the scientific production is due to very few authors. The signature rate per document was 5.2. Nine journals made up the Bradford core. The USA and the University of California showed the highest production. The most frequently discussed topics in relation to RNFL thinning are glaucoma and neurodegenerative diseases (NDD). The growth of the scientific literature on RNFL thickness was linear, with a very high transience rate, which indicates low productivity and the presence of numerous authors who publish sporadically on this topic. No evidence of a saturation point was observed. In the last 10 years, there has been an increase in documents relating RNFL decline to NDD.
120.
Abstract
Recent advances in deep learning have shown exciting promise in various artificial intelligence vision tasks, such as image classification, image noise reduction, object detection, semantic segmentation, and more. Restoring images captured in extremely dark environments is one such subtask in computer vision. Some of the latest progress in this field depends on sophisticated algorithms and massive image pairs taken under low-light and normal-light conditions. However, it is difficult to capture pictures of the same scale and location under two different light levels. We propose a method named NL2LL to collect underexposed images and the corresponding normal-exposure images by adjusting camera settings under the "normal" light level of daytime. Normal daytime light provides better conditions for taking high-quality image pairs quickly and accurately. Additionally, we show that a regularized denoising autoencoder is effective for restoring a low-light image. Owing to the high-quality training data, the proposed restoration algorithm achieves superior results on images taken in an extremely low-light environment (about 100× underexposure). Our algorithm surpasses most compared methods while relying on only a small amount of training data, 20 image pairs. Experiments also show that the model adapts to different brightness environments.
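As a concrete picture of the "regularized denoising autoencoder" ingredient, here is a minimal torch sketch: a small convolutional autoencoder trained to map underexposed inputs to their normal-exposure targets, with weight decay as the regularizer. The architecture and regularization choice are assumptions for illustration, not the paper's NL2LL network.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Small convolutional autoencoder: dark image in, restored image out.
    Input height/width are assumed divisible by 4 (two stride-2 stages)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingAE()
# weight_decay acts as an l2 regularizer on the autoencoder weights.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = nn.MSELoss()  # dark input reconstruction vs. normal-exposure target
```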
121. You Q, Wan C, Sun J, Shen J, Ye H, Yu Q. Fundus Image Enhancement Method Based on CycleGAN. Annu Int Conf IEEE Eng Med Biol Soc 2019:4500-4503. [PMID: 31946865] [DOI: 10.1109/embc.2019.8856950]
Abstract
In this paper, we propose a retinal image enhancement method, called Cycle-CBAM, which is based on CycleGAN and realizes migration from poor-quality fundus images to good-quality fundus images. It does not require a paired training set, which is critical since paired medical images are quite difficult to obtain. To address the degeneration of texture and detail caused by training on unpaired images, we enhance CycleGAN with the Convolutional Block Attention Module (CBAM). To verify the effect of our method, we not only analyzed the enhanced fundus images quantitatively and qualitatively, but also introduced a diabetic retinopathy (DR) classification module to evaluate the DR level of the fundus images before and after enhancement. The experiments show that integrating CBAM into CycleGAN yields superior performance to CycleGAN in both quantitative and qualitative results.
122. Ren X, Yang W, Cheng WH, Liu J. LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model. IEEE Trans Image Process 2020; 29:5862-5876. [PMID: 32286975] [DOI: 10.1109/tip.2020.2984098]
Abstract
Noise causes unpleasant visual effects in low-light image/video enhancement. In this paper, we aim to make the enhancement model and method noise-aware throughout. To deal with the heavy noise that previous methods do not handle, we introduce a robust low-light enhancement approach that jointly enhances low-light images/videos and suppresses intensive noise. Our method is based on the proposed Low-Rank Regularized Retinex Model (LR3M), which is the first to inject a low-rank prior into the Retinex decomposition process to suppress noise in the reflectance map. Our method estimates a piecewise-smooth illumination and a noise-suppressed reflectance sequentially, avoiding the residual noise in the illumination and reflectance maps that alternating decomposition methods usually leave behind. After obtaining the estimated illumination and reflectance, we adjust the illumination layer and generate the enhancement result. Furthermore, we apply LR3M to video low-light enhancement: we consider the inter-frame coherence of illumination maps and find similar patches through the reflectance maps of successive frames to form the low-rank prior, making use of temporal correspondence. Our method performs well on a wide variety of images and videos, and achieves better quality in both enhancement and denoising than state-of-the-art methods.
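The workhorse behind a low-rank prior like LR3M's is singular value thresholding applied to a matrix of stacked similar patches; the generic proximal step is sketched below in numpy. This shows the operator only, not the paper's full alternating solver.

```python
import numpy as np

def svt(patch_matrix, tau):
    """Singular value thresholding: the proximal step of a low-rank prior.
    patch_matrix: (n_patches, patch_dim) stack of similar patches. Noise
    spreads across all singular values while shared structure concentrates
    in the largest ones, so soft-thresholding suppresses the noise."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```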
123. Yang W, Yuan Y, Ren W, Liu J, Scheirer WJ, Wang Z, Zhang T, Zhong Q, Xie D, Pu S, Zheng Y, Qu Y, Xie Y, Chen L, Li Z, Hong C, Jiang H, Yang S, Liu Y, Qu X, Wan P, Zheng S, Zhong M, Su T, He L, Guo Y, Zhao Y, Zhu Z, Liang J, Wang J, Chen T, Quan Y, Xu Y, Liu B, Liu X, Sun Q, Lin T, Li X, Lu F, Gu L, Zhou S, Cao C, Zhang S, Chi C, Zhuang C, Lei Z, Li SZ, Wang S, Liu R, Yi D, Zuo Z, Chi J, Wang H, Wang K, Liu Y, Gao X, Chen Z, Guo C, Li Y, Zhong H, Huang J, Guo H, Yang J, Liao W, Yang J, Zhou L, Feng M, Qin L. Advancing Image Understanding in Poor Visibility Environments: A Collective Benchmark Study. IEEE Trans Image Process 2020; 29:5737-5752. [PMID: 32224457] [DOI: 10.1109/tip.2020.2981922]
Abstract
Existing enhancement methods are empirically expected to help high-level computer vision tasks; however, that is observed not always to be the case in practice. We focus on object and face detection under poor visibility caused by bad weather (haze, rain) and low-light conditions. To provide a thorough examination and fair comparison, we introduce three benchmark sets collected in real-world hazy, rainy, and low-light conditions, respectively, with annotated objects/faces. We launched the UG2+ challenge Track 2 competition at IEEE CVPR 2019, aiming to evoke a comprehensive discussion and exploration of whether and how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios. To the best of our knowledge, this is the first and currently largest effort of its kind. Baseline results obtained by cascading existing enhancement and detection models are reported, indicating the highly challenging nature of our new data as well as the large room for further technical innovation. Thanks to large participation from the research community, we are able to analyze representative team solutions, striving to better identify the strengths and limitations of existing mindsets as well as future directions.
124. Low-Light Image Enhancement Based on Deep Symmetric Encoder–Decoder Convolutional Networks. Symmetry (Basel) 2020. [DOI: 10.3390/sym12030446]
Abstract
A low-light image enhancement method based on a deep symmetric encoder–decoder convolutional network (LLED-Net) is proposed in this paper. In surveillance and tactical reconnaissance, collecting visual information from a dynamic environment and accurately processing that data is critical to making the right decisions and ensuring mission success. However, due to the cost and technical limitations of camera sensors, it is difficult to capture clear images or videos in low-light conditions. In this paper, a special encoder–decoder convolutional network is designed that utilizes multi-scale feature maps and skip connections to avoid vanishing gradients. To preserve image texture as much as possible, the model is trained with a structural similarity (SSIM) loss on datasets with different brightness levels, so that it can adaptively enhance images captured in low-light environments. The results show that the proposed algorithm provides significant improvements in quantitative comparisons with RED-Net and several other representative image enhancement algorithms.
125. Xu J, Hou Y, Ren D, Liu L, Zhu F, Yu M, Wang H, Shao L. STAR: A Structure and Texture Aware Retinex Model. IEEE Trans Image Process 2020; 29:5022-5037. [PMID: 32167892] [DOI: 10.1109/tip.2020.2974060]
Abstract
Retinex theory was developed mainly to decompose an image into illumination and reflectance components by analyzing local image derivatives. In this theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives emerge in the smooth illumination. In this paper, we utilize exponentiated local derivatives (with an exponent γ) of an observed image to generate its structure map and texture map. The structure map is produced by amplifying the derivatives with γ > 1, while the texture map is generated by shrinking them with γ < 1. To this end, we design exponential filters for the local derivatives, and demonstrate their capability of extracting accurate structure and texture maps under different choices of the exponent γ. The extracted structure and texture maps are employed to regularize the illumination and reflectance components in the Retinex decomposition. A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image. We solve the STAR model with an alternating optimization algorithm; each sub-problem is transformed into a vectorized least squares regression with a closed-form solution. Comprehensive experiments on commonly tested datasets demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance decomposition, low-light image enhancement, and color correction. The code is publicly available at https://github.com/csjunxu/STAR.
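The structure/texture maps described above reduce to a simple operation: take local derivative magnitudes and raise them to γ > 1 or γ < 1. A minimal numpy sketch with assumed default exponents follows; the paper's exponential filters are more refined than this plain gradient.

```python
import numpy as np

def exponentiated_derivative_maps(img, gamma_s=1.5, gamma_t=0.5, eps=1e-3):
    """img: (H, W) float in (0, 1]. Returns (structure, texture) maps.
    Exponentiating |gradient| with gamma > 1 amplifies large derivatives
    (structure); gamma < 1 lifts small ones (texture), per the abstract."""
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx**2 + gy**2) + eps
    structure = mag ** gamma_s   # large edges dominate
    texture = mag ** gamma_t     # small fluctuations preserved
    return structure, texture
```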
126. Zhang XS, Yang KF, Zhou J, Li YJ. Retina inspired tone mapping method for high dynamic range images. Opt Express 2020; 28:5953-5964. [PMID: 32225854] [DOI: 10.1364/oe.380555]
Abstract
The limited dynamic range of regular screens restricts the display of high dynamic range (HDR) images. Inspired by retinal processing mechanisms, we propose a tone mapping method to address this problem. In the retina, horizontal cells (HCs) adaptively adjust their receptive field (RF) size based on the local stimuli to regulate the visual signals absorbed by photoreceptors. Using this adaptive mechanism, the proposed method compresses the dynamic range locally in different regions, and has the capability of avoiding halo artifacts around the edges of high luminance contrast. Moreover, the proposed method introduces the center-surround antagonistic RF structure of bipolar cells (BCs) to enhance the local contrast and details. Extensive experiments show that the proposed method performs robustly well on a wide variety of images, providing competitive results against the state-of-the-art methods in terms of visual inspection, objective metrics and observer scores.
127. Okuwobi IP, Shen Y, Li M, Fan W, Yuan S, Chen Q. Hyperreflective Foci Enhancement in a Combined Spatial-Transform Domain for SD-OCT Images. Transl Vis Sci Technol 2020; 9:19. [PMID: 32714645] [PMCID: PMC7352042] [DOI: 10.1167/tvst.9.3.19]
Abstract
Purpose: Spectral-domain optical coherence tomography (SD-OCT) is a useful tool for visualizing, treating, and monitoring retinal abnormality in patients with different retinal diseases. However, the assessment of SD-OCT images is thwarted by the lack of the image quality ophthalmologists need to analyze and quantify disease. This has hindered the potential role of hyperreflective foci (HRF) as a prognostic indicator of visual outcome in patients with retinal diseases. We present a new multi-vendor algorithm that is robust to noise while enhancing the HRF in SD-OCT images.
Methods: The proposed algorithm processes the SD-OCT images in two parallel processes simultaneously, and the two processes are combined by histogram matching. An inverse of both logarithmic and orthogonal transforms is applied to the mapped data to produce the enhanced image.
Results: We evaluated our algorithm on a dataset composed of 40 SD-OCT volumes. The proposed method obtained high values for the measure of enhancement, peak signal-to-noise ratio, structural similarity, and correlation (ρ), and a low mean square error: 36.72, 38.87, 0.87, 0.98, and 25.12 for Cirrus; 40.77, 41.84, 0.89, 0.98, and 22.15 for Spectralis; and 30.81, 32.10, 0.81, 0.96, and 28.55 for Topcon SD-OCT devices, respectively.
Conclusions: The proposed algorithm can be used in the medical field to assist ophthalmologists and in the preprocessing of medical images.
Translational Relevance: The proposed enhancement algorithm facilitates the visualization and detection of HRF, which is a step forward in assisting clinicians with decision making about patient treatment planning and disease monitoring.
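The fusion step (two parallel enhancement branches merged by histogram matching) can be pictured with the generic sketch below, using skimage's histogram matching on single-channel images; the paper's logarithmic and orthogonal transform branches are not reproduced here.

```python
import numpy as np
from skimage.exposure import match_histograms

def combine_by_histogram_matching(spatial_out, transform_out):
    """Map the transform-domain result onto the intensity distribution of
    the spatial-domain result, then average the two aligned images.
    Both inputs: (H, W) grayscale arrays of the same shape."""
    matched = match_histograms(transform_out, spatial_out)
    return (matched.astype(np.float64) + spatial_out) / 2.0
```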
Affiliation(s)
- Idowu Paul Okuwobi: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Yifei Shen: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Su Zhou Ninth People's Hospital, Suzhou, China
- Mingchao Li: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Wen Fan: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Songtao Yuan: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; The Affiliated Shengze Hospital of Nanjing Medical University, Suzhou, China
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
128. Improved PSO Algorithm Based on Exponential Center Symmetric Inertia Weight Function and Its Application in Infrared Image Enhancement. Symmetry (Basel) 2020. [DOI: 10.3390/sym12020248]
Abstract
In this paper, an improved PSO (particle swarm optimization) algorithm is proposed and applied to infrared image enhancement, increasing the contrast of the infrared image while preserving its details. A new exponential center-symmetric inertia weight function is constructed, and a mechanism for jumping out of local optima is introduced, so that the algorithm balances global and local search. A new image enhancement method is proposed that combines the advantages of the bi-histogram equalization algorithm and the dual-domain image decomposition algorithm. The fitness function is constructed from five image quality evaluation factors, and the parameters are optimized by the proposed PSO algorithm; the resulting parameters are used to enhance the image. Experiments show that the proposed PSO algorithm performs well, and that the proposed image enhancement method not only improves image contrast but also preserves image details, giving a good visual effect.
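As a concrete frame for where such an inertia weight plugs in, here is a minimal PSO in numpy whose inertia weight w(t) is an exponential, center-symmetric function of the iteration index. The specific functional form (a Gaussian-shaped bump) and the velocity coefficients are illustrative assumptions; the paper's exact weight function and local-optimum jumping mechanism are not reproduced.

```python
import numpy as np

def pso(fitness, dim, n=30, iters=100, lo=-1.0, hi=1.0):
    """Minimal PSO minimizer. fitness: callable on a 1D parameter vector."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(fitness, 1, x)
    g = pbest[np.argmin(pbest_f)]
    for t in range(iters):
        # Exponential, center-symmetric inertia weight (illustrative form):
        # symmetric about the mid-run iteration, decaying toward both ends.
        w = 0.4 + 0.5 * np.exp(-((t - iters / 2) ** 2) / (2 * (iters / 6) ** 2))
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g
```

In the paper's setting, the parameter vector would hold the enhancement parameters (e.g., the bi-histogram clipping thresholds) and the fitness would aggregate the five image quality factors.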
129. Ai S, Kwon J. Extreme Low-Light Image Enhancement for Surveillance Cameras Using Attention U-Net. Sensors 2020; 20:495. [PMID: 31952325] [PMCID: PMC7014524] [DOI: 10.3390/s20020495]
Abstract
Low-light image enhancement is one of the most challenging tasks in computer vision, and it is actively researched and used to solve various problems. Most of the time, image processing achieves good performance under normal lighting conditions, but under low-light conditions an image turns out noisy and dark, which makes subsequent computer vision tasks difficult. To make buried details more visible and reduce blur and noise in an image captured in low light, a low-light image enhancement step is necessary. Much research has explored many different techniques, but most approaches require considerable effort or expensive equipment: for example, the image has to be captured as a raw camera file to be processed, and such methods do not perform well under extreme low-light conditions. In this paper, we propose a new convolutional network, Attention U-net (the integration of an attention gate and a U-net network), which works on common file types (.PNG, .JPEG, .JPG, etc.) with primary support from deep learning, addresses surveillance camera security in smart cities without requiring the raw image file from the camera, and can perform under the most extreme low-light conditions.
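A standard additive attention gate of the kind integrated into Attention U-Net (following Oktay et al.'s formulation) is sketched below in torch; whether this paper uses exactly this block is an assumption, but it conveys how a gating signal from the decoder suppresses irrelevant skip-connection features.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: a decoder 'gating' signal g decides which
    spatial positions of the skip-connection features x pass through."""
    def __init__(self, ch_x, ch_g, ch_mid):
        super().__init__()
        self.wx = nn.Conv2d(ch_x, ch_mid, 1)   # project skip features
        self.wg = nn.Conv2d(ch_g, ch_mid, 1)   # project gating signal
        self.psi = nn.Conv2d(ch_mid, 1, 1)     # collapse to one attention map

    def forward(self, x, g):
        # Assumes x and g share spatial size (upsample g beforehand if not).
        a = torch.relu(self.wx(x) + self.wg(g))
        alpha = torch.sigmoid(self.psi(a))      # per-pixel weight in (0, 1)
        return x * alpha
```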
130. Gu Z, Li F, Fang F, Zhang G. A Novel Retinex-Based Fractional-Order Variational Model for Images with Severely Low Light. IEEE Trans Image Process 2019; 29:3239-3253. [PMID: 31841409] [DOI: 10.1109/tip.2019.2958144]
Abstract
In this paper, we propose a novel Retinex-based fractional-order variational model for severely low-light images. The proposed method is more flexible in controlling the extent of regularization than existing integer-order regularization methods. Specifically, we decompose directly in the image domain and apply fractional-order gradient total variation regularization to both the reflectance component and the illumination component to obtain more appropriate estimates. The merits of the proposed method are as follows: 1) small-magnitude details are maintained in the estimated reflectance; 2) illumination components are effectively removed from the estimated reflectance; and 3) the estimated illumination is more likely piecewise smooth. We compare the proposed method with other closely related Retinex-based methods, and experimental results demonstrate its effectiveness.
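The abstract does not state the model explicitly; a generic energy of the kind it describes (an image-domain decomposition with fractional-order total variation on both Retinex components) might be written as follows. The exact terms, weights, and discretization in the paper may differ.

```latex
% Illustrative fractional-order Retinex energy (assumed generic form):
% decompose S = R * L in the image domain, with fractional-order TV
% on both the reflectance R and the illumination L.
\min_{R,\,L}\;
\tfrac{1}{2}\,\lVert S - R \circ L \rVert_2^2
\;+\; \lambda_1 \lVert \nabla^{\alpha} R \rVert_1
\;+\; \lambda_2 \lVert \nabla^{\beta} L \rVert_1,
\qquad 0 < \alpha,\ \beta \le 2
```

Here ∇^α denotes a discrete fractional-order gradient (e.g., a Grünwald-Letnikov discretization); setting α = β = 1 recovers ordinary total variation, which is the extra flexibility over integer-order regularization the abstract refers to.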
131. Improved Bilateral Filtering for a Gaussian Pyramid Structure-Based Image Enhancement Algorithm. Algorithms 2019. [DOI: 10.3390/a12120258]
Abstract
To address the problem of unclear images affected by occlusion from fog, we propose an improved Retinex image enhancement algorithm based on the Gaussian pyramid transformation. Our algorithm features bilateral filtering as a replacement for the Gaussian function used in the original Retinex algorithm. The technique operates as follows. First, we derive the mathematical model of an improved bilateral filtering function based on the spatial-domain kernel function and the pixel difference parameter. The input RGB image is then converted into the Hue Saturation Intensity (HSI) color space, where the reflection component of the intensity channel is extracted to obtain an image whose edges are retained and unaffected by changes in brightness. Following reconversion to the RGB color space, color images of this reflection component are obtained at different resolutions using Gaussian pyramid down-sampling. Each of these images is then processed with the improved Retinex algorithm to improve the contrast of the final image, which is reconstructed using the Laplace algorithm. Experimental results show that the proposed algorithm enhances image contrast effectively, and the color of the processed image is in line with what a human observer would perceive.
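The central substitution (a bilateral filter in place of the Gaussian surround of single-scale Retinex) is easy to sketch; the version below operates on one gray channel with illustrative filter parameters, leaving out the HSI conversion and pyramid stages.

```python
import cv2
import numpy as np

def bilateral_retinex(gray_u8, d=9, sigma_color=75, sigma_space=75):
    """Single-scale Retinex with a bilateral filter as the surround:
    reflectance = log(I) - log(bilateral(I)). The edge-preserving surround
    avoids the halos a Gaussian surround produces at strong edges."""
    img = gray_u8.astype(np.float32) + 1.0          # avoid log(0)
    surround = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    r = np.log(img) - np.log(surround + 1.0)
    return cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```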
132. Wang YF, Liu HM, Fu ZW. Low-Light Image Enhancement via the Absorption Light Scattering Model. IEEE Trans Image Process 2019; 28:5679-5690. [PMID: 31217118] [DOI: 10.1109/tip.2019.2922106]
Abstract
Low light often leads to poor image visibility, which can easily degrade the performance of computer vision algorithms. First, this paper proposes the absorption light scattering model (ALSM), which can reasonably explain the absorbed-light imaging process for low-light images. The absorbing light scattering image obtained via ALSM under sufficient, uniform illumination can reproduce the hidden outlines and details of the low-light image. We then observe that the minimum channel of the ALSM image exhibits high local similarity. This similarity can be constrained by superpixels, which effectively avoids gradient operations at edges so that noise is not amplified quickly during enhancement. Finally, by analyzing the monotonicity between the scene reflection and the atmospheric light or transmittance in ALSM, a new low-light image enhancement method is identified. We replace atmospheric light with inverted atmospheric light to reduce the contribution of atmospheric light to the imaging results. Moreover, a soft jointed mean-standard-deviation (MSD) mechanism is proposed that acts directly on the patches represented by the superpixels. The MSD can obtain a smaller transmittance than the minimum strategy, and it adjusts automatically according to the information in the image. Experiments on challenging low-light images reveal the advantages of our method compared with other powerful techniques.
133. Yang KF, Zhang XS, Li YJ. A Biological Vision Inspired Framework for Image Enhancement in Poor Visibility Conditions. IEEE Trans Image Process 2019; 29:1493-1506. [PMID: 31562084] [DOI: 10.1109/tip.2019.2938310]
Abstract
Image enhancement is an important pre-processing step for many computer vision applications, especially for scenes in poor visibility conditions. In this work, we develop a unified two-pathway model inspired by biological vision, especially the early visual mechanisms, which contributes to image enhancement tasks including low dynamic range (LDR) image enhancement and high dynamic range (HDR) image tone mapping. First, the input image is separated and sent into two visual pathways: the structure-pathway and the detail-pathway, corresponding to the M- and P-pathways of the early visual system, which encode low- and high-frequency visual information, respectively. In the structure-pathway, an extended biological normalization model is used to integrate global and local luminance adaptation, which can handle visual scenes with varying illumination. In the detail-pathway, detail enhancement and local noise suppression are achieved based on local energy weighting. Finally, the outputs of the structure- and detail-pathways are integrated to achieve low-light image enhancement. In addition, the proposed model can be used for tone mapping of HDR images with some fine-tuning steps. Extensive experiments on three datasets (two LDR image datasets and one HDR scene dataset) show that the proposed model handles the above visual enhancement tasks efficiently and outperforms related state-of-the-art methods.
134. A joint deep neural networks-based method for single nighttime rainy image enhancement. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04501-5]
135. Ghosh S, Chaudhury KN. Fast Bright-Pass Bilateral Filtering for Low-Light Enhancement. 2019 IEEE International Conference on Image Processing (ICIP) 2019. [DOI: 10.1109/icip.2019.8802986]
136. Autonomous robot navigation using Retinex algorithm for multiscale image adaptability in low-light environment. Intel Serv Robot 2019. [DOI: 10.1007/s11370-019-00287-6]
137. Prasath R, Kumanan T. Distance-Oriented Cuckoo Search enabled optimal histogram for underwater image enhancement: a novel quality metric analysis. Imaging Sci J 2018. [DOI: 10.1080/13682199.2018.1552356]
Affiliation(s)
- R. Prasath: Department of CSE, Meenakshi Academy of Higher Education and Research, Chennai, India
- T. Kumanan: Department of CSE, Meenakshi Academy of Higher Education and Research, Chennai, India
138. Long M, Li Z, Xie X, Li G, Wang Z. Adaptive Image Enhancement Based on Guide Image and Fraction-Power Transformation for Wireless Capsule Endoscopy. IEEE Trans Biomed Circuits Syst 2018; 12:993-1003. [PMID: 30346276] [DOI: 10.1109/tbcas.2018.2869530]
Abstract
Good image quality in wireless capsule endoscopy (WCE) is key for doctors diagnosing gastrointestinal (GI) tract diseases. However, poor illumination, the limited performance of the camera in WCE, and the complex environment of the GI tract usually result in low-quality endoscopic images. Existing image enhancement methods use only the information of the image itself, or multiple images of the same scene, to accomplish the enhancement. In this paper, we propose an adaptive image enhancement method based on a guide image and fraction-power transformation. First, the intensities of endoscopic images are analyzed to assess the illumination conditions. Second, images captured under poor illumination are enhanced by a new method called adaptive guide image based enhancement (AGIE), which enhances low-quality images using the information of a good-quality image of a similar scene; otherwise, images are enhanced by the proposed adaptive fraction-power transformation. Experimental results show that the proposed method improves the average intensity of endoscopic images by 64.20% and the average local entropy by 31.25%, outperforming state-of-the-art methods.
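A fraction-power transformation is a gamma-style mapping out = in^p with a fractional exponent; making p depend on image statistics is what makes it adaptive. The numpy sketch below uses an assumed linear rule tying p to mean intensity, purely for illustration; the paper's adaptation rule is more elaborate.

```python
import numpy as np

def fraction_power_enhance(img_u8, p_dark=0.6, p_bright=1.0):
    """Fraction-power transform out = in**p, with p < 1 brightening dark
    images; p is interpolated from the mean intensity (illustrative rule:
    the darker the image, the smaller the exponent)."""
    x = img_u8.astype(np.float32) / 255.0
    mean = x.mean()
    p = p_dark + (p_bright - p_dark) * mean
    return (255.0 * np.power(x, p)).astype(np.uint8)
```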
139.
Abstract
The purpose of sea image enhancement is to bring out the information of the waves, whose contrast is generally weak, and the enhancement effect is often degraded by impulse-type noise and non-uniform illumination. In this paper, we propose a variational model for sea image enhancement that combines a solar halo model and a Retinex model. This paper makes three main contributions: 1. establishing a Retinex model with noise suppression ability for sea images; 2. establishing a solar-scattering halo model through sea image bitplane analysis; 3. proposing a variational enhancement model combining the Retinex and halo models. Experimental results show that, compared with typical methods, our method significantly enhances sea surface images under different illumination environments.