1. Shen W, Zhou M, Luo J, Li Z, Kwong S. Graph-Represented Distribution Similarity Index for Full-Reference Image Quality Assessment. IEEE Transactions on Image Processing 2024; 33:3075-3089. [PMID: 38656839] [DOI: 10.1109/tip.2024.3390565]
Abstract
In this paper, we propose a graph-represented image distribution similarity (GRIDS) index for full-reference (FR) image quality assessment (IQA), which measures the perceptual distance between distorted and reference images by assessing the disparities between their distribution patterns under a graph-based representation. First, we transform the input image into a graph-based representation, which proves to be a versatile and effective choice for capturing visual perception features; this is achieved through the automatic generation of a vision graph from the given image content, leading to holistic perceptual associations for irregular image regions. Second, to reflect the perceived image distribution, we decompose the undirected graph into cliques and calculate the product of the clique potential functions to obtain the joint probability distribution of the undirected graph. Finally, we compare the distances between the graph feature distributions of the distorted and reference images at different stages and combine the distortion distribution measurements derived from different graph model depths to determine the perceived quality of the distorted image. Empirical results from an extensive array of experiments show that the proposed method achieves performance on par with state-of-the-art methods, demonstrating high predictive accuracy and consistent, monotonic behaviour in image quality prediction tasks. The source code is publicly available at https://github.com/Land5cape/GRIDS.
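The clique decomposition described above is the standard factorization of an undirected graphical model. As a point of reference (the paper's specific potential functions and graph construction are not reproduced here), the joint distribution takes the form:

```latex
% Joint probability of an undirected graph G decomposed into its cliques
% \mathcal{C}(G); \psi_C are clique potential functions and Z is the
% normalizing partition function (the concrete choices are model-specific).
P(\mathbf{x}) = \frac{1}{Z} \prod_{C \in \mathcal{C}(G)} \psi_C(\mathbf{x}_C),
\qquad
Z = \sum_{\mathbf{x}} \prod_{C \in \mathcal{C}(G)} \psi_C(\mathbf{x}_C)
```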
2. Frackiewicz M, Palus H, Prandzioch D. Superpixel-Based PSO Algorithms for Color Image Quantization. Sensors 2023; 23:1108. [PMID: 36772145] [PMCID: PMC9921601] [DOI: 10.3390/s23031108]
Abstract
Nature-inspired artificial intelligence algorithms have been applied to color image quantization (CIQ) for some time. Among them, the particle swarm optimization algorithm (PSO-CIQ) and its numerous modifications play an important role. In this article, we test the usefulness of one such modification, labeled IDE-PSO-CIQ, which additionally uses the idea of individual difference evolution based on the emotional states of particles. Its superiority over the PSO-CIQ algorithm is demonstrated using a set of quality indices based on pixels, patches, and superpixels. Furthermore, both algorithms are applied to superpixel versions of the input images, creating color palettes in much less time; a heuristic method is proposed to select the number of superpixels depending on the palette size. The effectiveness of the proposed algorithms is verified experimentally on a set of benchmark color images. The results indicate a severalfold reduction in computation time for the superpixel methods while maintaining high quality of the output quantized images, only slightly inferior to that obtained with the pixel-based methods.
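A minimal Python sketch of the underlying PSO-CIQ search loop may help make this concrete; the IDE emotional-state mechanism and the paper's parameter settings are omitted, and all names and constants here are hypothetical.

```python
# Minimal PSO sketch for color image quantization: each particle encodes a
# candidate palette; fitness is the quantization MSE of that palette.
import numpy as np

def quantization_mse(pixels, palette):
    # Mean squared distance from each pixel to its nearest palette color.
    d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)

def pso_ciq(pixels, n_colors=16, n_particles=20, iters=50,
            w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    # Each particle is a candidate palette of shape (n_colors, 3) in [0, 255].
    pos = rng.uniform(0, 255, (n_particles, n_colors, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([quantization_mse(pixels, p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        f = np.array([quantization_mse(pixels, p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest  # best palette found, shape (n_colors, 3)
```

Here `pixels` is an (N, 3) float array; subsampling the image keeps the fitness evaluation cheap, which is exactly where the superpixel variants gain their speed.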
3. Frackiewicz M, Palus H. Efficient Color Quantization Using Superpixels. Sensors 2022; 22:6043. [PMID: 36015804] [PMCID: PMC9416436] [DOI: 10.3390/s22166043]
Abstract
We propose three methods for the color quantization of superpixel images. Before each method is applied, the target image is segmented into a finite number of superpixels by grouping pixels that are similar in color; the color of a superpixel is the arithmetic mean of the colors of its constituent pixels. The superpixels are then quantized using common splitting or clustering methods such as median cut, k-means, and fuzzy c-means. In this manner, a color palette is generated, and the original pixel image is mapped onto it. The effectiveness of each proposed superpixel method is validated experimentally on different color images, and the methods are compared with state-of-the-art color quantization methods. The results show significantly reduced computation time along with high quality of the quantized images, although a multi-index evaluation shows that the quality is slightly worse than that obtained with pixel-based methods.
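The pipeline is straightforward to sketch. The fragment below illustrates the k-means variant under assumed segment and palette counts; SLIC is used as a stand-in for the segmentation, so treat the whole fragment as an approximation rather than the paper's implementation.

```python
# Superpixel-then-cluster color quantization sketch (k-means variant).
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_quantize(image, n_segments=1000, n_colors=16):
    # 1) Segment into superpixels; each superpixel's color is the mean
    #    color of its constituent pixels, as described in the abstract.
    labels = slic(image, n_segments=n_segments, start_label=0)
    n_sp = labels.max() + 1
    sp_colors = np.array([image[labels == i].mean(axis=0) for i in range(n_sp)])
    # 2) Cluster the few superpixel colors instead of millions of pixels.
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(sp_colors)
    # 3) Map every original pixel to its nearest palette color.
    flat = image.reshape(-1, image.shape[-1]).astype(np.float64)
    idx = km.predict(flat)
    return km.cluster_centers_[idx].reshape(image.shape).astype(image.dtype)
```

Clustering roughly a thousand superpixel colors rather than every pixel is what yields the reported reduction in computation time.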
4. Deep belief network for solving the image quality assessment in full reference and no reference model. Neural Computing and Applications 2022. [DOI: 10.1007/s00521-022-07649-9]
5. Li A, Wu J, Tian S, Li L, Dong W, Shi G. Blind image quality assessment based on progressive multi-task learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.05.043]
6. Ling Y, Zhou F, Guo K, Xue JH. ASSP: An adaptive sample statistics-based pooling for full-reference image quality assessment. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.12.098]
7. Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency. Electronics 2022. [DOI: 10.3390/electronics11040559]
Abstract
The purpose of image quality assessment is to estimate the perceptual quality of digital images in a manner consistent with human judgement. Over the years, many structural features have been utilized or proposed to quantify the degradation of an image in the presence of various noise types. The image gradient is an obvious and very popular tool in the literature for quantifying these changes, but it characterizes an image only locally. On the other hand, results from previous studies indicate that the human visual system analyzes the global contents of a scene before the local features. Relying on these properties of the human visual system, we propose a full-reference image quality assessment metric that characterizes the global changes of an image by Grünwald–Letnikov derivatives and the local changes by image gradients. Moreover, visual saliency is utilized to weight the changes and emphasize those areas of the image that are salient to the human visual system. To demonstrate the efficiency of the proposed method, extensive experiments were carried out on publicly available benchmark image quality assessment databases.
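For reference, the Grünwald–Letnikov derivative generalizes the difference operator to fractional orders. A discrete form commonly used in image processing is shown below; the order and truncation chosen in the paper are not reproduced here.

```latex
% Discrete Grünwald–Letnikov derivative of fractional order \alpha,
% truncated to K terms on a grid with step h.
D^{\alpha} f(x) \approx \frac{1}{h^{\alpha}}
  \sum_{k=0}^{K} (-1)^{k} \binom{\alpha}{k} f(x - kh),
\qquad
\binom{\alpha}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}
```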
8. Hu Y, Zhang B, Zhang Y, Jiang C, Chen Z. A feature-level full-reference image denoising quality assessment method based on joint sparse representation. Applied Intelligence 2022. [DOI: 10.1007/s10489-021-03052-4]
9. Chin SC, Chow CO, Kanesan J, Chuah JH. A Study on Distortion Estimation Based on Image Gradients. Sensors 2022; 22:639. [PMID: 35062601] [PMCID: PMC8779924] [DOI: 10.3390/s22020639]
Abstract
Image noise is a random variation of pixel values. A good estimate of the image noise parameters is crucial in image noise modeling, image denoising, and image quality assessment. To the best of our knowledge, no single estimator can predict all noise parameters for multiple noise types. The first contribution of our research is a noise data feature extractor that can effectively extract noise information from an image pair. The second contribution goes beyond existing noise parameter estimation algorithms, each of which can predict only one type of noise: our proposed method, DE-G, accurately estimates additive, multiplicative, and impulsive noise from single-source images. We also show that the proposed method can estimate multiple simultaneous corruptions.
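For intuition, textbook moment-based estimates of the three noise families named above can be computed from a reference/distorted pair as sketched here; this is not the paper's DE-G extractor, and the impulse threshold is an arbitrary assumption.

```python
# Illustrative moment-based noise-parameter estimates from an image pair.
import numpy as np

def estimate_noise_params(ref, noisy):
    ref = ref.astype(np.float64)
    noisy = noisy.astype(np.float64)
    residual = noisy - ref
    # Additive Gaussian noise: standard deviation of the residual.
    sigma_additive = residual.std()
    # Multiplicative (speckle) noise y = x(1 + n): std of relative residual.
    mask = ref > 1e-6
    sigma_multiplicative = (residual[mask] / ref[mask]).std()
    # Impulsive (salt-and-pepper) noise: fraction of pixels driven to the
    # extremes of the range with a large jump from the reference value.
    extremes = (noisy == noisy.min()) | (noisy == noisy.max())
    impulse_density = np.mean(extremes & (np.abs(residual) > 10.0))
    return sigma_additive, sigma_multiplicative, impulse_density
```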
10. Compressed Video Quality Index Based on Saliency-Aware Artifact Detection. Sensors 2021; 21:6429. [PMID: 34640751] [PMCID: PMC8512397] [DOI: 10.3390/s21196429]
Abstract
Video coding technology reduces the storage and transmission bandwidth required by video services by lowering the bitrate of the video stream. However, compressed video signals may suffer perceivable information loss, especially when the video is overcompressed. In such cases, viewers observe visually annoying artifacts, namely Perceivable Encoding Artifacts (PEAs), which degrade their perceived video quality. To monitor and measure these PEAs (including blurring, blocking, ringing and color bleeding), we propose a no-reference objective video quality metric named Saliency-Aware Artifact Measurement (SAAM). The SAAM metric first applies video saliency detection to extract regions of interest and splits these regions into a finite number of image patches. For each image patch, a data-driven model evaluates the intensities of the PEAs. Finally, these intensities are fused into an overall metric using Support Vector Regression (SVR). In the experiments, we compare the SAAM metric with other popular video quality metrics on four publicly available databases: LIVE, CSIQ, IVP and FERIT-RTRK. The results reveal promising quality prediction performance, superior to that of most popular compressed video quality evaluation models.
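The fusion stage maps per-patch PEA intensities to a quality score with SVR. A minimal scikit-learn sketch follows; the saliency detection and per-PEA intensity models are replaced by placeholder features, so only the fusion mechanics are real here.

```python
# Sketch of SVR fusion of artifact intensities into a quality score.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: one row per video; columns = intensities of the four PEAs (blurring,
# blocking, ringing, color bleeding) pooled over salient patches.
# y: subjective quality scores used for training.
rng = np.random.default_rng(0)
X = rng.random((200, 4))   # placeholder PEA intensities
y = rng.random(200)        # placeholder subjective scores

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
quality = model.predict(X[:5])  # fused SAAM-style quality estimates
```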
11.
Abstract
Objective image quality assessment (IQA) measures play an increasingly important role in the evaluation of digital image quality. New IQA indices are expected to correlate strongly with subjective observer evaluations expressed by the Mean Opinion Score (MOS) or Difference Mean Opinion Score (DMOS). One such recently proposed index is the SuperPixel-based SIMilarity (SPSIM) index, which uses superpixel patches instead of a rectangular pixel grid. In this paper, three modifications to the SPSIM index are proposed: the color space used by SPSIM is changed; the way SPSIM determines similarity maps is modified using methods derived from the Mean Deviation Similarity Index (MDSI); and a third modification combines the first two. The experimental results obtained for many color images from five image databases demonstrate the advantages of the proposed SPSIM modifications.
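For context, SPSIM- and MDSI-style indices build per-pixel similarity maps of the usual SSIM-family form before pooling; this generic form (not the paper's specific variant) is:

```latex
% Per-pixel similarity map: f_r, f_d are a chosen feature (luminance,
% chrominance, gradient magnitude, ...) of the reference and distorted
% images, and c is a small stabilizing constant.
S_f(i) = \frac{2 f_r(i)\, f_d(i) + c}{f_r(i)^2 + f_d(i)^2 + c}
```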
12. Fang Y, Zeng Y, Jiang W, Zhu H, Yan J. Superpixel-Based Quality Assessment of Multi-Exposure Image Fusion for Both Static and Dynamic Scenes. IEEE Transactions on Image Processing 2021; 30:2526-2537. [PMID: 33502981] [DOI: 10.1109/tip.2021.3053465]
Abstract
Multi-exposure image fusion (MEF) algorithms merge a stack of low dynamic range images with various exposure levels into a single well-perceived image. However, little work has been dedicated to predicting the visual quality of the fused images. In this work, we propose a novel and efficient objective image quality assessment (IQA) model for MEF images of both static and dynamic scenes based on superpixels and an information-theory-induced adaptive pooling strategy. First, with the help of superpixels, we divide fused images into large- and small-changed regions using the structural inconsistency map between each exposure and the fused image. Then, we compute Laplacian-pyramid-based quality maps for the large- and small-changed regions separately. Finally, an information-theory-induced adaptive pooling strategy is proposed to compute the perceptual quality of the fused image. Experimental results on three public databases of MEF images demonstrate that the proposed model achieves promising performance with relatively low computational complexity. We also demonstrate its potential application to parameter tuning of MEF algorithms.
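One plausible reading of the information-theory-induced pooling step, offered here purely as an assumption since the abstract does not spell it out, is to weight each local quality value by its information content so that surprising regions dominate the score:

```latex
% Hypothetical information-weighted pooling: local quality values q_i are
% weighted by their self-information w_i, where p_i is the probability a
% local-statistics model assigns to region i.
Q = \frac{\sum_i w_i\, q_i}{\sum_i w_i}, \qquad w_i = -\log p_i
```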
13. No-reference stereoscopic image quality evaluator with segmented monocular features and perceptual binocular features. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.049]
14. Zhou W, Jiang Q, Wang Y, Chen Z, Li W. Blind quality assessment for image superresolution using deep two-stream convolutional networks. Information Sciences 2020. [DOI: 10.1016/j.ins.2020.04.030]
15. Full-reference image quality metric for blurry images and compressed images using hybrid dictionary learning. Neural Computing and Applications 2020. [DOI: 10.1007/s00521-019-04694-9]
16. Zhang F, Zhang B, Zhang R, Zhang X. SPCM: Image quality assessment based on symmetry phase congruency. Applied Soft Computing 2020. [DOI: 10.1016/j.asoc.2019.105987]
17. Shanmugam A, Rukmani Devi S. Objective Edge Similarity Metric for denoising applications in MR images. Biocybernetics and Biomedical Engineering 2020. [DOI: 10.1016/j.bbe.2020.01.012]
18. Zhou F, Yao R, Liu B, Qiu G. Visual Quality Assessment for Super-Resolved Images: Database and Method. IEEE Transactions on Image Processing 2019; 28:3528-3541. [PMID: 30762547] [DOI: 10.1109/tip.2019.2898638]
Abstract
Image super-resolution (SR) has been an active research problem that has recently received renewed interest due to the introduction of new technologies such as deep learning. However, the lack of suitable criteria for evaluating SR performance has hindered technology development. In this paper, we fill a gap in the literature by providing the first publicly available database, as well as a new image quality assessment (IQA) method, specifically designed for assessing the visual quality of super-resolved images (SRIs). In constructing the quality assessment database for SRIs (QADS), we carefully selected 20 reference images and created 980 SRIs using 21 image SR methods. Mean opinion scores (MOS) for these SRIs were collected from 100 individuals participating in a suitably designed psychovisual experiment. Extensive numerical and statistical analysis shows that the MOS of QADS has excellent suitability and reliability. The psychovisual experiment led to the discovery that, unlike distortions encountered in other IQA databases, artifacts in SRIs degrade the image texture as well as the image structure, and that the structural and textural degradations have distinctive perceptual properties. Based on these insights, we propose a novel method to assess the visual quality of SRIs by considering the structural and textural components of images separately. Observing that textural degradations are mainly attributed to dissimilar textures or checkerboard artifacts, we propose to measure the changes of textural distributions. We also observe that structural degradations appear as blurring and jaggy artifacts in SRIs, and we develop separate similarity measures for the different types of structural degradation. A new pooling mechanism then fuses the different similarities into the final quality score for an SRI. Experiments conducted on QADS demonstrate that our method significantly outperforms classical as well as current state-of-the-art IQA methods.
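To make the texture side of this concrete, here is a hedged Python sketch of one way to measure changes of textural distributions, using uniform-LBP histograms and a chi-square distance; the paper's actual texture statistics may differ.

```python
# Compare texture distributions of a reference and a super-resolved image.
import numpy as np
from skimage.feature import local_binary_pattern

def texture_distribution_change(ref_gray, sri_gray, P=8, R=1.0):
    # Inputs are 2-D grayscale images (e.g., uint8 arrays).
    n_bins = P + 2  # number of distinct 'uniform' LBP codes
    hists = []
    for img in (ref_gray, sri_gray):
        codes = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    hr, hs = hists
    # Chi-square distance between the two normalized histograms.
    return 0.5 * np.sum((hr - hs) ** 2 / (hr + hs + 1e-12))
```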