1. Padhiary M, Barbhuiya JA, Roy D, Roy P. 3D printing applications in smart farming and food processing. Smart Agricultural Technology 2024; 9:100553. DOI: 10.1016/j.atech.2024.100553
2. Altamimi A, Ben Youssef B. Lossless and Near-Lossless Compression Algorithms for Remotely Sensed Hyperspectral Images. Entropy 2024; 26:316. PMID: 38667870; PMCID: PMC11048921; DOI: 10.3390/e26040316
Abstract
Rapid and continuous advancements in remote sensing technology have resulted in finer resolutions and higher acquisition rates of hyperspectral images (HSIs). These developments have created a need for new processing techniques, driven by the limited power and constrained hardware resources aboard satellites. This article proposes two novel lossless and near-lossless compression methods, employing our recent seed-generation and quadrature-based square-rooting algorithms, respectively. The main advantage of the former method lies in its low complexity, relying on simple arithmetic operations, which makes it suitable for real-time onboard compression. In addition, the near-lossless compressor can be applied to hard-to-compress images, offering a stable reduction of nearly 40% with a maximum relative error of 0.33 and a maximum absolute error of 30. Our results also show that a lossless compression performance, in terms of compression ratio, of up to 2.6 is achieved when testing with hyperspectral images from the Corpus dataset. Further, all four variations of our proposed lossless compression method improve on the compression rate of the state-of-the-art k2-raster technique for most of these HSIs. In particular, a data reduction enhancement of up to 29.89% is realized when comparing their respective geometric mean values.
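The reported figures (a compression ratio of 2.6, a maximum absolute error of 30, and a maximum relative error of 0.33) follow standard definitions that can be sketched as follows. This is an illustrative computation on synthetic data, not the authors' seed-generation or square-rooting algorithm:

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    # CR = original size / compressed size; e.g. CR = 2.6 means the
    # compressed stream is 2.6x smaller than the raw hyperspectral cube.
    return original_bytes / compressed_bytes

def near_lossless_errors(original, reconstructed):
    # Near-lossless bounds: maximum absolute error and maximum relative
    # error between the original cube and its reconstruction.
    diff = np.abs(original.astype(np.int64) - reconstructed.astype(np.int64))
    nonzero = original != 0
    max_rel = float((diff[nonzero] / np.abs(original[nonzero])).max())
    return int(diff.max()), max_rel

# Synthetic 16-bit cube (bands x rows x cols), perturbed by at most +/-3.
rng = np.random.default_rng(0)
cube = rng.integers(100, 4000, size=(8, 32, 32), dtype=np.int64)
recon = cube + rng.integers(-3, 4, size=cube.shape)
max_abs, max_rel = near_lossless_errors(cube, recon)
```

A lossless codec is simply the special case where both error bounds are zero.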
Affiliation(s)
- Amal Altamimi
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Space Technologies Institute, King Abdulaziz City for Science and Technology, P.O. Box 8612, Riyadh 12354, Saudi Arabia
- Belgacem Ben Youssef
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
3. Latha HR, Ramaprasath A. HWCD: A hybrid approach for image compression using wavelet, encryption using confusion, and decryption using diffusion scheme. Journal of Intelligent Systems 2023. DOI: 10.1515/jisys-2022-9056
Abstract
Image data play an important role in various real-time online and offline applications. The biomedical field has adopted imaging systems to detect, diagnose, and prevent several types of diseases and abnormalities. Biomedical imaging data contain a large amount of information and therefore require substantial storage space. Moreover, telemedicine and IoT-based remote health monitoring systems, in which data are transmitted from one place to another, are now widely deployed, and transmitting such large volumes of data consumes considerable bandwidth. During transmission, attackers can also target the communication channel and obtain important or secret information. Hence, biomedical image compression and encryption are considered the solution to these issues. Several techniques have been presented, but achieving the desired performance for a combined module is a challenging task. In this work, a novel combined approach for image compression and encryption is developed. First, an image compression scheme using the wavelet transform is presented, and then a cryptography scheme is presented using confusion and diffusion. The outcome of the proposed approach is compared with various existing techniques. The experimental analysis shows that the proposed approach achieves better performance in terms of autocorrelation, histogram, information entropy, PSNR, MSE, and SSIM.
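As a generic illustration of the confusion/diffusion idea (not the authors' HWCD scheme), the sketch below permutes pixel positions with a key-seeded PRNG (confusion) and then chains each pixel with the previous ciphertext byte (diffusion); both steps invert exactly. The key handling and PRNG choice are simplifications for demonstration only:

```python
import numpy as np

def encrypt(img, key):
    rng = np.random.default_rng(key)
    n = img.size
    perm = rng.permutation(n)                 # confusion: scramble positions
    stream = rng.integers(0, 256, size=n, dtype=np.uint8)
    confused = img.flatten()[perm]
    cipher = np.empty(n, dtype=np.uint8)
    prev = np.uint8(0)
    for i in range(n):                        # diffusion: chained XOR so a
        cipher[i] = confused[i] ^ stream[i] ^ prev  # 1-pixel change spreads
        prev = cipher[i]
    return cipher.reshape(img.shape)

def decrypt(cipher, key):
    rng = np.random.default_rng(key)
    n = cipher.size
    perm = rng.permutation(n)                 # same PRNG draws as encryptor
    stream = rng.integers(0, 256, size=n, dtype=np.uint8)
    flat = cipher.flatten()
    confused = np.empty(n, dtype=np.uint8)
    prev = np.uint8(0)
    for i in range(n):                        # undo diffusion
        confused[i] = flat[i] ^ stream[i] ^ prev
        prev = flat[i]
    plain = np.empty(n, dtype=np.uint8)
    plain[perm] = confused                    # undo confusion
    return plain.reshape(cipher.shape)
```

The code operates on uint8 images; a production scheme would use a cryptographic keystream rather than a general-purpose PRNG.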
Affiliation(s)
- Alagarswamy Ramaprasath
- Department of Computer Applications, Hindustan Institute of Technology and Science, Chennai, India
4. Lin H, Tse R, Tang SK, Qiang Z, Pau G. Few-Shot Learning for Plant-Disease Recognition in the Frequency Domain. Plants 2022; 11:2814. PMID: 36365267; PMCID: PMC9657239; DOI: 10.3390/plants11212814
Abstract
Few-shot learning (FSL) is well suited to plant-disease recognition due to the shortage of data. However, the limitations of feature representation and the demanding generalization requirements remain pressing issues. Recent studies reveal that frequency representations contain rich patterns for image understanding. Given that most existing studies on image classification have been conducted in the spatial domain, we introduce frequency representation into the FSL paradigm for plant-disease recognition. A discrete cosine transform module is designed for converting RGB color images to the frequency domain, and a learning-based frequency selection method is proposed to select informative frequencies. As a post-processing step for feature vectors, a Gaussian-like calibration module is proposed to improve generalization by aligning a skewed distribution with a Gaussian-like distribution. The two modules can serve as independent components ported to other networks. Extensive experiments are carried out to explore the configurations of the two modules. Our results show that performance is much better in the frequency domain than in the spatial domain, and that the Gaussian-like calibrator further improves performance. Disease identification within the same plant species and the cross-domain problem, both critical for bringing FSL to the agricultural industry, are directions for future research.
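The DCT step that moves an RGB image into the frequency domain can be sketched with an orthonormal DCT-II basis. This is a generic illustration, not the paper's specific module or its learned frequency selection:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are cosine atoms, so C @ C.T == I.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def rgb_to_frequency(img):
    # 2-D DCT per color channel: C_h @ X @ C_w.T for an h x w x 3 image.
    h, w, _ = img.shape
    Ch, Cw = dct_matrix(h), dct_matrix(w)
    return np.stack([Ch @ img[..., c].astype(float) @ Cw.T
                     for c in range(img.shape[-1])], axis=-1)
```

Because the basis is orthonormal, the transform preserves signal energy; "selecting informative frequencies" then amounts to keeping a subset of these coefficients.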
Affiliation(s)
- Hong Lin
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
- Rita Tse
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
- Su-Kit Tang
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
- Zhenping Qiang
- College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, China
- Giovanni Pau
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
- Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
- Samueli Computer Science Department, University of California, Los Angeles, CA 90095, USA
5. Dai L, Zhang L, Li H. Image Compression Using Stochastic-AFD Based Multisignal Sparse Representation. IEEE Transactions on Image Processing 2022; 31:5317-5331. PMID: 35921349; DOI: 10.1109/tip.2022.3194696
Abstract
Adaptive Fourier decomposition (AFD) is a newly developed signal processing tool that can adaptively decompose any single signal using a Szegö kernel dictionary. To process multiple signals, a novel stochastic-AFD (SAFD) theory was recently proposed. The innovation of this study is twofold. First, a SAFD-based general multi-signal sparse representation learning algorithm is designed and implemented for the first time in the literature; it can be used in many signal and image processing areas. Second, a novel SAFD-based image compression framework is proposed. The algorithm design and implementation of the SAFD theory and the image compression methods are presented in detail. The proposed compression methods are compared with 13 other state-of-the-art compression methods, including JPEG, JPEG2000, BPG, and other popular deep learning-based methods. The experimental results show that our methods achieve the best-balanced performance. The proposed methods are based on single-image adaptive sparse representation learning and require no pre-training. In addition, the decompression quality or compression efficiency can be adjusted easily by a single parameter, namely the decomposition level. Supported by a solid mathematical foundation, the method has the potential to become a new core technology in image compression.
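As a generic illustration of dictionary-based sparse representation (greedy matching pursuit over an arbitrary dictionary, not the Szegö-kernel SAFD of the paper), consider:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    # Greedy sparse coding: at each step pick the unit-norm dictionary
    # column most correlated with the residual and subtract its projection.
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual = residual - corr[k] * dictionary[:, k]
    return coeffs, residual
```

Compression follows by storing only the few (index, coefficient) pairs; with an orthonormal dictionary the residual vanishes once every active atom has been selected.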
6. Anju MI, Mohan J. DWT Lifting Scheme for Image Compression with Cordic-Enhanced Operation. Int J Pattern Recogn 2022. DOI: 10.1142/s0218001422540064
Abstract
This paper proposes an image compression scheme based on an Adaptive Discrete Wavelet Transform Lifting Scheme (ADWT-LS). The key feature of the proposed DWT lifting method is the splitting of the low-pass and high-pass filters into upper and lower triangular matrices, which converts filter execution into banded matrix multiplications via a lifting factorization with fine-tuned parameters. Optimal tuning, the central contribution, is achieved via a new hybrid algorithm, the Lioness-Integrated Whale Optimization Algorithm (LI-WOA), which combines the Lion Algorithm (LA) and the Whale Optimization Algorithm (WOA). In addition, cosine evaluation is carried out with the CORDIC algorithm. The paper defines a single objective function that relates multiple constraints, namely the Peak Signal-to-Noise Ratio (PSNR) and the Compression Ratio (CR). Finally, the performance of the proposed work is compared with conventional models on several performance measures.
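The lifting idea itself (split into even/odd samples, predict, update, with each step exactly invertible in integer arithmetic) can be sketched with the standard LeGall 5/3 factorization, not the paper's LI-WOA-tuned variant; periodic boundary handling is assumed for brevity:

```python
import numpy as np

def lift_forward(x):
    # LeGall 5/3 integer lifting (periodic boundaries), one level.
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    d = odd - ((even + np.roll(even, -1)) // 2)   # predict: high-pass detail
    s = even + ((np.roll(d, 1) + d + 2) // 4)     # update: low-pass approx
    return s, d

def lift_inverse(s, d):
    # Each lifting step is undone by flipping its sign: exact reconstruction.
    even = s - ((np.roll(d, 1) + d + 2) // 4)
    odd = d + ((even + np.roll(even, -1)) // 2)
    x = np.empty(even.size + odd.size, dtype=int)
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse simply re-runs each step with the opposite sign, lifting is lossless regardless of how the predict/update filters are tuned, which is what makes the factorization attractive for optimization.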
Affiliation(s)
- M. I. Anju
- Department of Electronics and Communication Engineering, Anna University, Guindy, Chennai, Tamilnadu, 600025, India
- J. Mohan
- Department of Electronics and Communication Engineering, SRM Valliammai Engineering College, Kattankulathur, Tamilnadu, 603203, India
7. Nagoor OH, Whittle J, Deng J, Mora B, Jones MW. Sampling strategies for learning-based 3D medical image compression. Machine Learning with Applications 2022. DOI: 10.1016/j.mlwa.2022.100273
8. Thomos N, Maugey T, Toni L. Machine Learning for Multimedia Communications. Sensors 2022; 22:819. PMID: 35161566; PMCID: PMC8840624; DOI: 10.3390/s22030819
Abstract
Machine learning is revolutionizing the way multimedia information is processed and transmitted to users. After intensive training, learning-based methods have delivered impressive efficiency and accuracy improvements across the transmission pipeline. For example, the high model capacity of learning-based architectures enables us to model image and video behavior accurately enough that tremendous compression gains can be achieved. Similarly, error concealment, streaming strategies, and even user perception modeling have widely benefited from recent learning-oriented developments. However, learning-based algorithms often imply drastic changes to the way data are represented or consumed, meaning that the overall pipeline can be affected even when only a subpart of it is optimized. In this paper, we review the recent major advances proposed across the transmission chain, and we discuss their potential impact and the research challenges they raise.
Affiliation(s)
- Nikolaos Thomos
- School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
- Laura Toni
- Department of Electronic and Electrical Engineering, University College London (UCL), London WC1E 6AE, UK
9. A Systematic Review of Hardware-Accelerated Compression of Remotely Sensed Hyperspectral Images. Sensors 2021; 22:263. PMID: 35009804; PMCID: PMC8749878; DOI: 10.3390/s22010263
Abstract
Hyperspectral imaging is an indispensable technology for many remote sensing applications, yet expensive in terms of computing resources. It requires significant processing power and large storage due to the immense size of hyperspectral data, especially in the aftermath of the recent advancements in sensor technology. Issues pertaining to bandwidth limitation also arise when seeking to transfer such data from airborne satellites to ground stations for postprocessing. This is particularly crucial for small satellite applications where the platform is confined to limited power, weight, and storage capacity. The availability of onboard data compression would help alleviate the impact of these issues while preserving the information contained in the hyperspectral image. We present herein a systematic review of hardware-accelerated compression of hyperspectral images targeting remote sensing applications. We reviewed a total of 101 papers published from 2000 to 2021. We present a comparative performance analysis of the synthesized results with an emphasis on metrics like power requirement, throughput, and compression ratio. Furthermore, we rank the best algorithms based on efficiency and elaborate on the major factors impacting the performance of hardware-accelerated compression. We conclude by highlighting some of the research gaps in the literature and recommend potential areas of future research.
10. Zahra A, Ghafoor M, Munir K, Ullah A, Ul Abideen Z. Application of region-based video surveillance in smart cities using deep learning. Multimedia Tools and Applications 2021; 83:1-26. PMID: 34975282; PMCID: PMC8710820; DOI: 10.1007/s11042-021-11468-w
Abstract
Smart video surveillance helps to build a more robust smart city environment. Cameras at varied angles act as smart sensors, collecting visual data from the smart city environment and transmitting it for further visual analysis. The transmitted visual data must be of high quality for efficient analysis, which is challenging when transmitting video over low-bandwidth communication channels. In the latest smart surveillance cameras, high-quality video transmission is maintained through video encoding techniques such as high-efficiency video coding. However, these techniques still provide limited capabilities, and the demand for high-quality encoding of salient regions such as pedestrians, vehicles, cyclists/motorcyclists, and roads in video surveillance systems is still not met. This work contributes an efficient salient-region-based surveillance framework for smart cities. The proposed framework integrates a deep learning-based video surveillance technique that extracts salient regions from a video frame without information loss and then encodes the frame at reduced size. We applied this approach in diverse smart city case studies to test the applicability of the framework. The proposed work achieves a bitrate saving of 56.92%, a peak signal-to-noise ratio improvement of 5.35 dB, and salient-region segmentation accuracies of 92% and 96% on two different benchmark datasets. Consequently, the generation of less computationally demanding region-based video data makes the framework adaptable for improving surveillance solutions in smart cities.
Affiliation(s)
- Asma Zahra
- Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
- Department of Computer Science, National University of Modern Languages, Islamabad, Pakistan
- Mubeen Ghafoor
- School of Computer Science, University of Lincoln, Lincoln, UK
- Kamran Munir
- Department of Computer Science and Creative Technologies (CSCT), University of the West of England (UWE), Bristol, UK
- Ata Ullah
- Department of Computer Science, National University of Modern Languages, Islamabad, Pakistan
- Zain Ul Abideen
- Department of Computer Science, National University of Modern Languages, Islamabad, Pakistan
11. Impact of Image Compression on the Performance of Steel Surface Defect Classification with a CNN. Journal of Sensor and Actuator Networks 2021. DOI: 10.3390/jsan10040073
Abstract
Machine vision is increasingly replacing manual steel surface inspection. Automatic inspection of steel surface defects makes it possible to ensure product quality in the steel industry with high accuracy. However, optimizing inspection time presents a great challenge for integrating machine vision into high-speed production lines. In this context, compressing the collected images before transmission is essential to save bandwidth and energy and to improve the latency of vision applications. The aim of this paper was to study the impact of the quality degradation resulting from image compression on the classification performance of steel surface defects with a CNN. Image compression was applied to the Northeastern University (NEU) surface-defect database with various compression ratios. Three different models were trained and tested with these images to classify surface defects using three different approaches. The results showed that models trained and tested on the same compression quality maintained approximately the same classification performance across all compression grades. The findings also clearly indicated that classification efficiency suffered when the training and test datasets were compressed with different parameters; this impact was more obvious when there was a large difference between the compression parameters, and for models that had achieved very high accuracy. Finally, compression-based data augmentation significantly increased classification precision to near-perfect scores (98-100%), and thus improved the generalization of models when tested on different compression qualities. These results can guide the appropriate integration of image compression into machine vision systems.
12. Anju MI, Mohan J. Deep image compression with lifting scheme: Wavelet transform domain based on high-frequency subband prediction. Int J Intell Syst 2021. DOI: 10.1002/int.22769
Affiliation(s)
- M. I. Anju
- Research Scholar, Department of Electronics and Communication Engineering, Anna University, Chennai, Tamil Nadu, India
- J. Mohan
- Department of Electronics and Communication Engineering, SRM Valliammai Engineering College, Kattankulathur, Chennai, Tamil Nadu, India
13. Yamagiwa S, Ichinomiya Y. Stream-Based Visually Lossless Data Compression Applying Variable Bit-Length ADPCM Encoding. Sensors 2021; 21:4602. PMID: 34283137; PMCID: PMC8271783; DOI: 10.3390/s21134602
Abstract
Video applications, implemented by server–client systems connected via the Internet, broadcasting services for mobile devices such as smartphones, and surveillance cameras for security, have become major services in the engineering field. Most current video encoding mechanisms reduce the data rate with lossy compression methods such as the MPEG formats. However, for applications with special needs for high-speed communication, such as display applications and high-accuracy object detection on video streams, we need an encoding mechanism without any loss of pixel information, called visually lossless compression. This paper focuses on Adaptive Differential Pulse Code Modulation (ADPCM), which encodes a data stream at a constant bit length per data element. The conventional ADPCM, however, has no mechanism to control the encoding bit length dynamically. We propose a novel ADPCM that provides variable bit-length control, called ADPCM-VBL, for the encoding/decoding mechanism. Furthermore, since we expect the data encoded by ADPCM to maintain low entropy, we reduce the amount of data further by applying lossless data compression. Combining ADPCM-VBL with a lossless compressor, this paper proposes a video transfer system that autonomously controls throughput in the communication data path. Through evaluations focusing on encoding performance and image quality, we confirm that the proposed mechanisms work effectively for applications that need visually lossless compression of video streams at low latency.
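A stripped-down DPCM core (not the proposed ADPCM-VBL, whose adaptive bit-length control is the paper's contribution) shows why residual coding bounds the per-sample error when the encoder predicts from its own reconstruction:

```python
def dpcm_encode(samples, step):
    # Quantize each prediction residual; predicting from the *reconstructed*
    # previous sample keeps encoder and decoder in lockstep, so the
    # reconstruction error never exceeds step/2 per sample.
    codes, prev = [], 0.0
    for s in samples:
        code = round((s - prev) / step)
        codes.append(code)
        prev += code * step
    return codes

def dpcm_decode(codes, step):
    out, prev = [], 0.0
    for code in codes:
        prev += code * step
        out.append(prev)
    return out
```

The small integer residual codes have low entropy, which is why chaining a lossless compressor after this stage pays off; the proposed system exploits exactly that property.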
Affiliation(s)
- Shinichi Yamagiwa
- Faculty of Engineering, Information and Systems, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- JST, PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan
- Yuma Ichinomiya
- Department of Computer Science, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
14. Pourasad Y, Cavallaro F. A Novel Image Processing Approach to Enhancement and Compression of X-ray Images. International Journal of Environmental Research and Public Health 2021; 18:6724. PMID: 34206486; PMCID: PMC8297375; DOI: 10.3390/ijerph18136724
Abstract
The volume of data generated and stored in the medical field is continually increasing. For the efficient handling of these extensive data, compression methods need to be re-explored with the algorithm's complexity in mind. An image processing approach is needed to reduce the redundancy in image content, thereby increasing the capacity to store or transfer information in an optimal form. In this study, two families of techniques, lossless and lossy compression, were applied for image compression while preserving image quality. Moreover, several enhancement techniques were employed to increase the quality of the compressed images. These methods were investigated, and several comparative results are presented. Finally, performance metrics were extracted and analyzed against state-of-the-art methods, using PSNR, MSE, and SSIM on sample medical images. Detailed analysis of these metrics demonstrates better efficiency than other image processing techniques. This study helps to clarify these strategies and assists researchers in selecting the most appropriate technique for a given use case.
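Two of the three reported metrics, MSE and PSNR, have standard closed forms (SSIM involves local statistics and is omitted here). This is a generic reference implementation, not code from the paper:

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two images of equal shape.
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images,
    # which is the lossless-compression case.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Higher PSNR (lower MSE) indicates that lossy compression or enhancement has preserved the X-ray content more faithfully.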
Affiliation(s)
- Yaghoub Pourasad
- Department of Electrical Engineering, Urmia University of Technology, Urmia 17165-57166, Iran
- Fausto Cavallaro
- Department of Economics, University of Molise, Via De Sanctis, 86100 Campobasso, Italy
15. Xin G, Fan P. A lossless compression method for multi-component medical images based on big data mining. Sci Rep 2021; 11:12372. PMID: 34117350; PMCID: PMC8196061; DOI: 10.1038/s41598-021-91920-x
Abstract
Medical images play an important part in disease diagnosis. Their lossless compression is critical, as it directly determines the local storage space and communication bandwidth required by remote medical systems, and thus supports the diagnosis and treatment of patients. Two properties are particular to medical images: losslessness and similarity. The key point of compression is how to exploit these two properties to reduce the information needed to represent an image. In this paper, we employ big data mining to set up the image codebook, that is, to find the basic components of the images. We propose a soft compression algorithm for multi-component medical images that can accurately reflect the fundamental structure of images. A general representation framework for image compression is also put forward, and the results indicate that our soft compression algorithm can outperform the popular benchmarks PNG and JPEG2000 in terms of compression ratio.
Affiliation(s)
- Gangtao Xin
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Pingyi Fan
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
16. Gandam A, Sidhu JS, Verma S, Jhanjhi NZ, Nayyar A, Abouhawwash M, Nam Y. An efficient post-processing adaptive filtering technique to rectifying the flickering effects. PLoS One 2021; 16:e0250959. PMID: 33970949; PMCID: PMC8109823; DOI: 10.1371/journal.pone.0250959
Abstract
Compression at a very low bit rate (≤0.5 bpp) degrades video frames under standard coding algorithms such as H.261, H.262, H.264, MPEG-1, and MPEG-4, which themselves produce many artifacts. This paper focuses on an efficient pre- and post-processing technique (PP-AFT) to address and rectify quantization error, ringing, blocking artifacts, and the flickering effect, all of which significantly degrade the visual quality of video frames. The PP-AFT method differentiates the blocked images or frames into different regions using an activity function and applies adaptive filters tailored to each classified region. The designed process also introduces an adaptive flicker extraction and removal method, along with a 2-D filter to remove ringing effects in edge regions. The PP-AFT technique is implemented on various videos, and the results are compared with existing techniques using performance metrics such as PSNR-B, MSSIM, and GBIM. Simulation results show significant improvement in the subjective quality of different video frames. The proposed method outperforms state-of-the-art de-blocking methods in terms of PSNR-B, with average gains between 0.7 and 1.9 dB, while reducing average GBIM by 35.83-47.7% and keeping MSSIM very close to the original sequence at 0.978.
Affiliation(s)
- Anudeep Gandam
- Department of Electronics and Communication Engineering, IKG-Punjab Technical University Jalandhar, Punjab, India
- Jagroop Singh Sidhu
- Department of Electronics and Communication Engineering, DAVIET Jalandhar, Punjab, India
- Sahil Verma
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- N. Z. Jhanjhi
- School of Computer Science and Engineering, SCE Taylor’s University, Subang Jaya, Malaysia
- Anand Nayyar
- Graduate School, Duy Tan University, Da Nang, Viet Nam
- Faculty of Information Technology, Duy Tan University, Da Nang, Viet Nam
- Mohamed Abouhawwash
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, United States of America
- Yunyoung Nam
- Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea
17. Qualitative Rating of Lossy Compression for Aerial Imagery by Neutrosophic WASPAS Method. Symmetry (Basel) 2021. DOI: 10.3390/sym13020273
Abstract
The monitoring and management of constantly changing landscape patterns rely on large amounts of remote sensing data, from satellite images and aerial photography, that require lossy compression for effective storage and transmission. Lossy compression brings the necessity of evaluating image quality so as to preserve the important and detailed visual features of the data. We proposed and verified a weighted combination of qualitative parameters within a multi-criteria decision-making (MCDM) framework to evaluate the quality of compressed aerial images. Aerial imagery of different contents and resolutions was tested using transform-based lossy compression algorithms. We formulated an MCDM problem dedicated to rating lossy compression algorithms, governed by a set of qualitative image parameters and visually acceptable lossy compression ratios. We ranked the lossy compression algorithms at different compression ratios by their suitability for aerial images using the neutrosophic weighted aggregated sum product assessment (WASPAS) method. The novelty of our methodology is the use of a weighted combination of different qualitative parameters for lossy compression estimation, yielding a more precise evaluation of the effect of lossy compression on image content. Our methodology also provides means of solving different subtasks, either by altering the weights or the set of aspects.
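The crisp (non-neutrosophic) core of WASPAS blends a weighted sum and a weighted product of normalized criteria; the sketch below shows that aggregation only, leaving out the neutrosophic extension used in the paper:

```python
import numpy as np

def waspas_scores(matrix, weights, benefit, lam=0.5):
    # matrix: alternatives x criteria; benefit[j] marks larger-is-better
    # criteria. Scores blend the weighted sum model (WSM) and the
    # weighted product model (WPM) with coefficient lambda.
    X = matrix.astype(float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        X[:, j] = col / col.max() if benefit[j] else col.min() / col
    wsm = X @ weights
    wpm = np.prod(X ** weights, axis=1)
    return lam * wsm + (1 - lam) * wpm
```

With weights summing to 1, an alternative that is best in every criterion scores exactly 1; ranking compression algorithms then reduces to sorting these scores.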
18.
Abstract
A great deal of information is produced daily due to advances in telecommunication, and storing it on digital devices or transmitting it over the Internet is challenging. Data compression is essential for managing this information well, so research on data compression has become a topic of great interest, and the number of applications in this area is increasing. Over the last few decades, international organisations have developed many strategies for data compression, yet no single algorithm works well on all types of data. The compression ratio, together with encoding and decoding times, is mainly used to evaluate an algorithm for lossless image compression. Although the compression ratio is more significant for some applications, others may require higher encoding or decoding speeds, or both; alternatively, all three parameters may be equally important. The main aim of this article is to analyse the most advanced lossless image compression algorithms from each point of view and to evaluate the strength of each algorithm for each kind of image. We develop a technique for evaluating an image compression algorithm on more than one parameter. The findings presented in this paper may be helpful to new researchers and to users in this area.
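A multi-parameter evaluation of the kind described (compression ratio plus encoding and decoding times) can be harnessed as below, with zlib standing in for the image codec under test; any lossless codec with compress/decompress calls fits the same harness:

```python
import time
import zlib

def evaluate_codec(data, level=6):
    # Report the three evaluation parameters for one codec configuration:
    # compression ratio, encoding time, and decoding time.
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    t_enc = time.perf_counter() - t0
    t0 = time.perf_counter()
    restored = zlib.decompress(packed)
    t_dec = time.perf_counter() - t0
    assert restored == data  # lossless round trip
    return {"ratio": len(data) / len(packed), "enc_s": t_enc, "dec_s": t_dec}
```

Running the harness across codecs and image classes yields exactly the kind of three-column comparison the article advocates; how the three numbers are weighted then depends on the application.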
|
19
|
Intelligent Machine Vision Model for Defective Product Inspection Based on Machine Learning. JOURNAL OF SENSOR AND ACTUATOR NETWORKS 2021. [DOI: 10.3390/jsan10010007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Quality control is one of the industrial tasks most amenable to improvement through technological innovation. As an innovative technology, machine vision enables reliable and fast 24/7 inspections and helps producers improve the efficiency of manufacturing operations. The data made accessible by vision equipment can be used to identify and report defective products, understand the causes of deficiencies, and allow rapid and efficient intervention in smart factories. From this perspective, the machine vision model proposed in this paper combines the identification of defective products with the continuous improvement of manufacturing processes by predicting the most suitable production process parameters for obtaining a defect-free item. The suggested model exploits all the data generated by the various technologies integrated in the manufacturing chain, thus meeting the requirements of quality management in the context of Industry 4.0, using predictive analysis to identify patterns in the data and suggest corrective actions that ensure product quality. In addition, a comparative study between several machine learning algorithms, for both the product classification and process improvement models, is performed in order to evaluate the designed system. The results of this study show that the proposed model largely meets the requirements for the proper implementation of these techniques.
|
20
|
Highly Efficient Lossless Coding for High Dynamic Range Red, Clear, Clear, Clear Image Sensors. SENSORS 2021; 21:s21020653. [PMID: 33477807 PMCID: PMC7832868 DOI: 10.3390/s21020653] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Revised: 12/09/2020] [Accepted: 01/16/2021] [Indexed: 11/30/2022]
Abstract
In this paper we present a highly efficient coding procedure, specially designed and dedicated to operate with high dynamic range (HDR) RCCC (red, clear, clear, clear) image sensors used mainly in advanced driver-assistance systems (ADAS) and autonomous driving systems (ADS). The coding procedure can be used for lossless reduction of data volume during the development and testing of video processing algorithms, e.g., under software-in-the-loop (SiL) or hardware-in-the-loop (HiL) conditions. It was therefore designed to achieve both state-of-the-art compression ratios and real-time compression feasibility. In tests we utilized the FFV1 lossless codec and demonstrated throughput of up to 81 fps (frames per second) for compression and 87 fps for decompression on a single Intel i7 CPU.
|
21
|
Reliability Analysis of the SHyLoC CCSDS123 IP Core for Lossless Hyperspectral Image Compression Using COTS FPGAs. ELECTRONICS 2020. [DOI: 10.3390/electronics9101681] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Hyperspectral images can comprise hundreds of spectral bands, which means that they can represent a large volume of data that is difficult to manage with the available on-board resources. Lossless compression solutions are attractive for reducing the amount of information stored or transmitted while fully preserving it. The Hyperspectral Lossless Compressor for space applications (SHyLoC), which is part of the European Space Agency (ESA) IP core library, has been demonstrated to meet the requirements of space missions in terms of compression efficiency, low complexity and high throughput. Currently, there is a trend towards using Commercial Off-The-Shelf (COTS) on-board electronic devices on small satellites, and commercial Field-Programmable Gate Arrays (FPGAs) have been used in a number of them. Hence, a reliability analysis is required to ensure the robustness of the applications to Single Event Upsets (SEUs) in the configuration memory. In this work, we present a reliability analysis of this hyperspectral image compression module as a first step towards the development of ad hoc fault-tolerant protection techniques for the SHyLoC IP core. The reliability analysis is performed using a fault-injection-based experimental set-up in which a hardware implementation of the Consultative Committee for Space Data Systems (CCSDS) 123.0-B-1 lossless compression standard is tested against configuration memory errors in a Xilinx Zynq XC7Z020 System-on-Chip. The results obtained for unhardened and redundancy-based protected versions of the module are put into perspective in terms of area/power consumption and availability/protection coverage gained, to provide insight into the development of more efficient knowledge-based protection schemes.
|
22
|
Towards the Concurrent Execution of Multiple Hyperspectral Imaging Applications by Means of Computationally Simple Operations. REMOTE SENSING 2020. [DOI: 10.3390/rs12081343] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The on-board processing of remotely sensed hyperspectral images is gaining momentum in applications that demand a quick response, as an alternative to conventional approaches in which the acquired images are processed off-line once they have been transmitted to the ground segment. However, adopting this on-board processing strategy brings further challenges for the remote-sensing research community, owing to the high data rate of new-generation hyperspectral sensors and the limited on-board computational resources. The situation becomes even more stringent when different time-sensitive applications coexist, since different tasks must then be processed sequentially on the same computing device. In this work, we address this issue by defining a set of core operations that extracts spectral features useful for many hyperspectral analysis techniques, such as unmixing, compression and target/anomaly detection. This permits the concurrent execution of such techniques by reusing operations, thereby requiring far fewer computational resources than if they were executed separately. In particular, in this manuscript we verify the merit of our proposal for the concurrent execution of lossy compression and anomaly detection in hyperspectral images. To evaluate the performance, several images taken by an unmanned aerial vehicle were used. The obtained results clearly support the benefits of our proposal, not only in terms of accuracy but also in terms of computational burden, reducing the number of operations to be executed by roughly 50%. Future research lines focus on extending this methodology to other fields such as target detection, classification and dimensionality reduction.
|
23
|
Abstract
Modern daily life activities produce a huge amount of data, which creates a big challenge for storage and communication. Hospitals, for example, produce large volumes of data daily, making it difficult to store them in limited storage or to communicate them over restricted Internet bandwidth. There is therefore an increasing demand for research in data compression and communication theory to deal with such challenges and to meet the requirements of high-speed data transmission over networks. In this paper, we focus on a deep analysis of the most common techniques in image compression. We present a detailed analysis of run-length, entropy-based and dictionary-based lossless image compression algorithms, with a common numeric example for a clear comparison. Following that, the state-of-the-art techniques are discussed based on some benchmark images. Finally, we use standard metrics such as average code length (ACL), compression ratio (CR), peak signal-to-noise ratio (PSNR), efficiency, encoding time (ET) and decoding time (DT) in order to measure the performance of the state-of-the-art techniques.
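As a minimal illustration of the run-length family this entry surveys (a generic textbook sketch, not the paper's own numeric example), the following encodes a string into (symbol, run) pairs and inverts the mapping:

```python
def rle_encode(data):
    """Run-length encoding: collapse each maximal run of a repeated
    symbol into a (symbol, run_length) pair."""
    if not data:
        return []
    runs, prev, count = [], data[0], 1
    for sym in data[1:]:
        if sym == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = sym, 1
    runs.append((prev, count))  # flush the final run
    return runs

def rle_decode(runs):
    """Inverse mapping: expand each (symbol, run_length) pair."""
    return "".join(sym * count for sym, count in runs)
```

For "AAAABBBCCD" this yields four pairs instead of ten symbols; counting each pair as two output symbols gives a compression ratio of 10/8, the kind of CR figure the metrics paragraph above refers to.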
|
24
|
Mehmood A, Chaudhary NI, Zameer A, Raja MAZ. Backtracking search optimization heuristics for nonlinear Hammerstein controlled auto regressive auto regressive systems. ISA TRANSACTIONS 2019; 91:99-113. [PMID: 30770155 DOI: 10.1016/j.isatra.2019.01.042] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Revised: 12/13/2018] [Accepted: 01/31/2019] [Indexed: 06/09/2023]
Abstract
In this work, a novel application of evolutionary computational heuristics is presented for the parameter identification problem of nonlinear Hammerstein controlled auto regressive auto regressive (NHCARAR) systems, exploiting the global search competency of the backtracking search algorithm (BSA), differential evolution (DE) and genetic algorithms (GAs). The mean squared error between actual and approximated design variables is used as the fitness function for the NHCARAR system. The cost function is optimized with BSA for the NHCARAR model over varying degrees of freedom and noise variances. To verify and validate the worth of the presented scheme, comparative studies are carried out against DE and GAs through statistical observations based on the weight deviation factor, root mean squared error, and Theil's inequality coefficient, as well as complexity measures.
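A deliberately simplified sketch of the backtracking-search idea follows (historical population, scaled mutation, greedy selection); the crossover map of the full BSA is omitted, and a toy sphere objective stands in for the paper's NHCARAR mean-squared-error surface, so this is an assumption-laden illustration rather than the authors' implementation:

```python
import random

def bsa_minimize(fitness, dim, bounds, pop_size=20, iters=200, seed=0):
    """Simplified Backtracking Search: a historical population steers
    mutation, and greedy selection keeps whichever of trial/current
    individuals scores better."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    old = [row[:] for row in pop]          # historical population
    for _ in range(iters):
        if rng.random() < rng.random():    # occasionally refresh history
            old = [row[:] for row in pop]
        rng.shuffle(old)
        F = 3.0 * rng.gauss(0.0, 1.0)      # mutation scale factor
        for i in range(pop_size):
            trial = [min(hi, max(lo, x + F * (o - x)))
                     for x, o in zip(pop[i], old[i])]
            if fitness(trial) < fitness(pop[i]):
                pop[i] = trial             # greedy selection
    return min(pop, key=fitness)
```

Substituting the NHCARAR prediction-error MSE for `fitness` would recover the shape of the identification loop the abstract describes.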
Affiliation(s)
- Ammara Mehmood
- Department of Electrical Engineering, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad, Pakistan.
- Aneela Zameer
- Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad, Pakistan.
- Muhammad Asif Zahoor Raja
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Attock Campus, Attock, Pakistan.
|
25
|
Dhou K. An innovative design of a hybrid chain coding algorithm for bi-level image compression using an agent-based modeling approach. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.03.024] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
26
|
An Efficient Encoding Algorithm Using Local Path on Huffman Encoding Algorithm for Compression. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9040782] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Keywords: encoding algorithm; compression; Huffman encoding algorithm; arithmetic coding algorithm
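Only the keywords of this entry survive, so as a generic reference point here is the standard textbook Huffman construction the title builds on (not the paper's local-path variant, which is unavailable):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: repeatedly merge the two
    lightest subtrees, prefixing '0'/'1' to their codes."""
    freq = Counter(text)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}  # degenerate single-symbol input
    heap = [[w, [sym, ""]] for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]     # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]     # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])
```

More frequent symbols receive shorter codewords, which is the property any refinement of the encoding step, such as the local-path approach named in the title, must preserve.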
|