1. scDM: A deep generative method for cell surface protein prediction with diffusion model. J Mol Biol 2024;436:168610. PMID: 38754773. DOI: 10.1016/j.jmb.2024.168610. Received 01/03/2024; revised 05/06/2024; accepted 05/09/2024.
Abstract
Proteins are the executors of organismal functions, and the transition from RNA to protein is subject to post-transcriptional regulation; considering RNA and surface protein expression jointly can therefore provide additional evidence about biological processes. Cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) can measure both RNA and protein expression in single cells, but these experiments are expensive and time-consuming. Given the lack of computational tools for predicting surface proteins, we used CITE-seq datasets to design a deep generative prediction method based on diffusion models and mined the predictions for biological discoveries. Our method, scDM, predicts protein expression values from the RNA expression values of individual cells; it encodes the data into the model in a novel way and learns the data distribution by progressively adding Gaussian noise and then gradually removing it to generate predicted samples. Comprehensive evaluation across different datasets demonstrated that our predictions are satisfactory and further demonstrated the effectiveness of incorporating information from single-cell multiomics data into diffusion models for biological studies. We also found that jointly analysing the predicted surface protein expression and cancer cell drug scores can suggest new directions for discovering therapeutic drug targets.
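The forward half of the process described above (gradually corrupting data with Gaussian noise so a model can learn to reverse it) is standard across diffusion models. The scDM code itself is not reproduced here; the sketch below shows the usual closed-form DDPM-style noising step, where the linear beta schedule and the 16-dimensional toy vector are illustrative assumptions, not details from the paper:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process.

    Closed form: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_s).
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear schedule
x0 = rng.standard_normal(16)            # stand-in for one cell's expression vector
xt, eps = forward_diffuse(x0, 999, betas, rng)
```

At large t the cumulative product of (1 - beta) is near zero, so x_t is essentially pure noise; the reverse model is trained to recover the data distribution from such samples.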
2. Application of Multiple-Optimization Filtering Algorithm in Remote Sensing Image Denoising. Sensors (Basel) 2023;23:7813. PMID: 37765870. PMCID: PMC10535474. DOI: 10.3390/s23187813. Received 08/09/2023; revised 08/30/2023; accepted 09/05/2023.
Abstract
Denoising remote sensing images is crucial in the application and research of remote sensing imagery. Noise in remote sensing images originates from sensor characteristics, signal transmission, and environmental conditions, and Gaussian noise is the most common type. In this paper, we propose a multiple-optimization bilateral filtering (MOBF) algorithm based on edge detection and differential evolution (DE). The proposed algorithm optimizes the spatial-domain Gaussian kernel by using the standard deviation and width of the edge response. Using the DE algorithm, a population based on the standard deviation of the gray-value domain undergoes iterative mutation, crossover, and selection to refine the candidate solution vectors and determine the optimal standard deviation of the pixel range-domain kernel. As a result, the MOBF algorithm requires no parameter input. To verify the feasibility and effectiveness of the proposed algorithm, denoising experiments were conducted on remote sensing images using the mean squared error, peak signal-to-noise ratio, and structural similarity index as evaluation metrics. The experimental results show that the MOBF algorithm outperforms traditional algorithms on all three evaluation metrics.
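As context for what MOBF tunes, a plain bilateral filter (spatial Gaussian weight times range/intensity Gaussian weight) can be sketched as follows. Here sigma_s and sigma_r are supplied by hand, whereas MOBF's contribution is deriving them automatically from the edge response and differential evolution; the parameter values below are illustrative assumptions:

```python
import numpy as np

def bilateral_filter(img, radius, sigma_s, sigma_r):
    """Plain bilateral filter: each output pixel is a weighted mean of its
    neighbourhood, weight = spatial Gaussian * range (intensity) Gaussian."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a flat region the filter behaves like a Gaussian blur; near an edge the range kernel suppresses contributions from across the edge, which is why a well-tuned bilateral filter denoises without over-blurring edges.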
3. Denoising of Nifti (MRI) Images with a Regularized Neighborhood Pixel Similarity Wavelet Algorithm. Sensors (Basel) 2023;23:7780. PMID: 37765837. PMCID: PMC10536345. DOI: 10.3390/s23187780. Received 06/24/2023; revised 07/26/2023; accepted 09/04/2023.
Abstract
The recovery of semantics from corrupted images is a significant challenge in image processing. Noise can obscure features, interfere with accurate analysis, and bias results. To address this issue, the Regularized Neighborhood Pixel Similarity Wavelet algorithm (PixSimWave) was developed for denoising NIfTI magnetic resonance imaging (MRI) data. The PixSimWave algorithm uses regularized pixel similarity detection to improve the accuracy of noise reduction, creating patches to analyze pixel intensities and locate matching pixels, together with adaptive neighborhood filtering that estimates noisy pixel values by allocating each pixel a weight based on its similarity. The wavelet transform decomposes the image into scales and orientations, and the resulting sparse representation allows a soft threshold to be applied based on similarity to the original pixels. The proposed method was evaluated on simulated and raw T1w MRIs, outperforming other methods with an SSIM of 0.9908 at a low Rician noise level of 3% and 0.9881 at a high noise level of 17%. Under added Gaussian noise the method also achieved higher PSNR and SSIM, indicating that it outperformed other models while preserving edges and textures. In summary, the PixSimWave algorithm is a viable noise-elimination approach that employs both sparse wavelet coefficients and regularized similarity with decreased computation time, improving the accuracy of noise reduction in images.
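The core wavelet step mentioned above (a sparse representation plus a soft threshold on coefficients) is standard. A minimal one-level Haar version is sketched below; this is not the PixSimWave algorithm itself, which additionally weights by regularized pixel similarity, and the threshold value used later is an arbitrary illustrative choice:

```python
import numpy as np

def haar_1level(x):
    """One level of the orthonormal Haar transform (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inv_haar_1level(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Soft threshold: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(x, thresh):
    a, d = haar_1level(x)
    return inv_haar_1level(a, soft(d, thresh))
```

Noise spreads roughly evenly across coefficients while smooth structure concentrates in a few large ones, so shrinking the small detail coefficients removes mostly noise.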
4. Capacity-Achieving Input Distributions of Additive Vector Gaussian Noise Channels: Even-Moment Constraints and Unbounded or Compact Support. Entropy (Basel) 2023;25:1180. PMID: 37628210. PMCID: PMC10453642. DOI: 10.3390/e25081180. Received 07/25/2023; revised 08/04/2023; accepted 08/05/2023.
Abstract
We investigate the support of a capacity-achieving input to a vector-valued Gaussian noise channel. The input is subjected to a radial even-moment constraint and is either allowed to take any value in R^n or is restricted to a given compact subset of R^n. It is shown that the support of the capacity-achieving distribution is composed of a countable union of submanifolds, each of dimension at most n-1. When the input is restricted to a compact subset of R^n, this union is finite. Finally, the support of the capacity-achieving distribution is shown to have Lebesgue measure 0 and to be nowhere dense in R^n.
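In symbols, the quantity studied is the capacity of the additive vector Gaussian noise channel under a radial even-moment constraint; the constraint order k, moment budget a, and noise covariance below are generic symbols assumed for illustration, not the paper's exact notation:

```latex
C \;=\; \sup_{\substack{P_X:\ \operatorname{supp}(P_X)\subseteq S,\\[2pt] \mathbb{E}\,\|X\|^{2k}\,\le\,a}} I(X;\,X+N),
\qquad N \sim \mathcal{N}\!\left(0,\;\sigma^2 I_n\right),
```

where S is either R^n (unbounded support) or a given compact subset of R^n; the result above says the optimal P_X is supported on a countable (finite, in the compact case) union of submanifolds of dimension at most n-1.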
5. Sensitivity-based dynamic performance assessment for model predictive control with Gaussian noise. ISA Trans 2023;139:35-48. PMID: 37059670. DOI: 10.1016/j.isatra.2023.04.002. Received 12/19/2021; revised 04/03/2023; accepted 04/03/2023.
Abstract
Economic model predictive control and tracking model predictive control are two popular advanced process control strategies used in various fields. Nevertheless, for a given process, it is unclear which controller achieves better performance when noise exists. To this end, a sensitivity-based performance assessment approach is proposed in this work to pre-evaluate the dynamic economic and tracking performance of the two controllers and guide controller selection. First, their controller gains around the optimal steady state are evaluated using the sensitivities of the corresponding constrained dynamic programming problems. Second, the controller gains are substituted into the control loop to derive the propagation of process and measurement noise. Subsequently, a Taylor expansion is introduced to simplify the calculation of the variance and mean of each variable. Finally, the tracking and economic performance surfaces are plotted, and the performance indices are precisely calculated by integrating the objective functions against the probability density functions. Moreover, boundary moving (i.e., back-off) and target moving can be pre-configured to guarantee the stability of the controlled processes based on the proposed approach. Extensive simulations under different cases show that the proposed approach provides useful guidance on performance assessment and controller design.
6. Comparison of Training Strategies for Autoencoder-Based Monochromatic Image Denoising. Sensors (Basel) 2023;23:5538. PMID: 37420705. PMCID: PMC10305082. DOI: 10.3390/s23125538. Received 04/28/2023; revised 06/07/2023; accepted 06/09/2023.
Abstract
Monochromatic images are used mainly in cases where the intensity of the received signal is examined. The identification of the observed objects as well as the estimation of intensity emitted by them depends largely on the precision of light measurement in image pixels. Unfortunately, this type of imaging is often affected by noise, which significantly degrades the quality of the results. In order to reduce it, numerous deterministic algorithms are used, with Non-Local-Means and Block-Matching-3D being the most widespread and treated as the reference point of the current state-of-the-art. Our article focuses on the utilization of machine learning (ML) for the denoising of monochromatic images in multiple data availability scenarios, including those with no access to noise-free data. For this purpose, a simple autoencoder architecture was chosen and checked for various training approaches on two large and widely used image datasets: MNIST and CIFAR-10. The results show that the method of training as well as architecture and the similarity of images within the image dataset significantly affect the ML-based denoising. However, even without access to any clear data, the performance of such algorithms is frequently well above the current state-of-the-art; therefore, they should be considered for monochromatic image denoising.
7. Hybrid Optimization Algorithm Enabled Deep Learning Approach Brain Tumor Segmentation and Classification Using MRI. J Digit Imaging 2023;36:847-868. PMID: 36622465. PMCID: PMC10287879. DOI: 10.1007/s10278-022-00752-2. Received 07/04/2022; revised 09/16/2022; accepted 12/04/2022.
Abstract
A brain tumor is an unnatural, uncontrolled growth of brain cells that endangers human health. Magnetic resonance imaging (MRI) is widely applied for classifying and detecting brain tumors due to its superior resolution. In general, medical specialists require detailed information on the size, type, and changes in small lesions for effective classification, and a timely, exact diagnosis plays a major role in the efficient treatment of patients. Therefore, in this research, an efficient hybrid optimization algorithm is implemented for brain tumor segmentation and classification. Convolutional neural network (CNN) features are extracted to enable better classification. Classification is performed by feeding the extracted features into a deep residual network (DRN), which is trained using the proposed chronological Jaya honey badger algorithm (CJHBA), an integration of the Jaya algorithm, the honey badger algorithm (HBA), and the chronological concept. Performance is evaluated on the BRATS 2018 and Figshare datasets; the maximum accuracy, sensitivity, and specificity, attained on the BRATS dataset, are 0.9210, 0.9313, and 0.9284, respectively.
8. The effect of Gaussian noise on pneumonia detection on chest radiographs, using convolutional neural networks. Radiography (Lond) 2023;29:38-43. PMID: 36274315. DOI: 10.1016/j.radi.2022.09.011. Received 03/16/2022; revised 09/26/2022; accepted 09/29/2022.
Abstract
INTRODUCTION: Under-exposed chest X-rays (CXR) have increased image noise, which may affect convolutional neural network (CNN) performance. This study aimed to train and validate CNNs for classifying CXR as normal or pneumonia when acquired at different image noise levels.
METHODS: The study used the curated, publicly available "Chest X-Ray Pneumonia" dataset of 5856 AP CXR, comprising 1583 normal and 4273 viral and bacterial pneumonia cases. Zero-mean Gaussian noise was added to the images at five variance levels, corresponding to decreasing exposure. Each noise-level dataset was split into 80% training, 10% validation, and 10% test data, and classified using a custom-trained sequential CNN architecture. Six classification tasks were developed: one for the original dataset and one for each of the five Gaussian noise levels. Sensitivity, specificity, predictive values, and accuracy were used as evaluation performance metrics.
RESULTS: CNN evaluation on the different datasets revealed no performance drop from the original dataset to the five noise-level datasets. Sensitivity, specificity, and accuracy on the original dataset were 98.7%, 76.1%, and 90.2%. Across the five Gaussian noise levels, sensitivity, specificity, and accuracy ranged from 96.9% to 98.2%, 94.4% to 98.7%, and 96.8% to 97.6%, respectively. A heat map was used for visual explanation of the CNNs.
CONCLUSION: The CNNs' sensitivity was maintained, and their specificity increased, in distinguishing normal from pneumonia CXR with the introduction of image noise.
IMPLICATIONS FOR PRACTICE: No performance drop was observed when CNNs distinguished cases with and without pneumonia at different Gaussian noise levels. This has potential for decreasing the radiation dose to patients or maintaining exposure parameters for patients who require additional radiographs.
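The corruption model and metrics described in the methods are easy to reproduce: zero-mean Gaussian noise added at a chosen variance, plus the usual confusion-matrix statistics. The sketch below assumes images scaled to [0, 1]; the variance value and array shapes are illustrative, not the study's actual settings:

```python
import numpy as np

def add_gaussian_noise(img, variance, rng):
    """Add zero-mean Gaussian noise at a given variance, clipped to [0, 1]."""
    noisy = img + rng.normal(0.0, np.sqrt(variance), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def sens_spec_acc(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = pneumonia)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / y_true.size
```

Generating one dataset per variance level and retraining on each reproduces the study's six-task design.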
9. Utility-Privacy Trade-Off in Distributed Machine Learning Systems. Entropy (Basel) 2022;24:1299. PMID: 36141185. PMCID: PMC9498028. DOI: 10.3390/e24091299. Received 06/27/2022; revised 09/07/2022; accepted 09/08/2022.
Abstract
In distributed machine learning (DML), although clients' data are not transmitted directly to the server for model training, attackers can still obtain clients' sensitive information by analyzing the local gradient parameters the clients upload. We therefore use the differential privacy (DP) mechanism to protect the clients' local parameters. In this paper, we study the utility-privacy trade-off in DML under the DP mechanism from an information-theoretic point of view. Specifically, three cases are considered: independent local parameters with independent DP noise, and dependent local parameters with either independent or dependent DP noise. Mutual information and conditional mutual information are used to characterize utility and privacy, respectively. First, we show the relationship between utility and privacy in the three cases. Then, we derive the optimal noise variance that achieves maximal utility under a given level of privacy. Finally, the results are further illustrated numerically.
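A common concrete instance of the DP mechanism discussed above is the Gaussian mechanism applied to a client's local gradient: clip to a fixed L2 norm, then add Gaussian noise scaled to that norm. The sketch below is generic DP-style noising, not the paper's specific formulation; clip_norm and sigma are illustrative parameters:

```python
import numpy as np

def gaussian_mechanism(grad, clip_norm, sigma, rng):
    """Clip a gradient to L2 norm <= clip_norm, then add per-coordinate
    Gaussian noise with standard deviation sigma * clip_norm."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, grad.shape)
```

Larger sigma means stronger privacy (less information about the local parameters leaks through the upload) at the cost of utility, which is exactly the trade-off the paper quantifies with mutual information.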
10. Denoising for 3D Point Cloud Based on Regularization of a Statistical Low-Dimensional Manifold. Sensors (Basel) 2022;22:2666. PMID: 35408279. PMCID: PMC9002461. DOI: 10.3390/s22072666. Received 01/22/2022; revised 03/15/2022; accepted 03/26/2022.
Abstract
A point cloud obtained by a stereo matching algorithm or a three-dimensional (3D) scanner generally contains complex noise, which affects the accuracy of subsequent surface reconstruction or visualization processing. To eliminate this complex noise, a new regularization algorithm for denoising was proposed. Given that 3D point clouds have low-dimensional structures, a statistical low-dimensional manifold (SLDM) model was established. By regularizing its dimension, the denoising problem of the point cloud was expressed as an optimization problem based on the geometric constraints of the regularization term of the manifold. A low-dimensional smooth manifold model was constructed by discrete sampling and solved by means of a statistical method and an alternating iterative method. The performance of the denoising algorithm was quantitatively evaluated in three respects: the signal-to-noise ratio (SNR), mean square error (MSE) and structural similarity (SSIM). Analysis and comparison showed that, compared with the algebraic point-set surface (APSS), non-local denoising (NLD) and feature graph learning (FGL) algorithms, the mean SNR of the point cloud denoised using the proposed method increased by 1.22 dB, 1.81 dB and 1.20 dB, respectively, its mean MSE decreased by 0.096, 0.086 and 0.076, respectively, and its mean SSIM decreased by 0.023, 0.022 and 0.020, respectively, which shows that the proposed method is more effective in eliminating Gaussian noise and Laplace noise in common point clouds. The application cases showed that the proposed algorithm can retain the geometric feature information of point clouds while eliminating complex noise.
11. Noise and Memristance Variation Tolerance of Single Crossbar Architectures for Neuromorphic Image Recognition. Micromachines (Basel) 2021;12:690. PMID: 34199202. PMCID: PMC8231790. DOI: 10.3390/mi12060690. Received 05/22/2021; revised 06/06/2021; accepted 06/10/2021.
Abstract
We performed a comparative study on the Gaussian noise and memristance variation tolerance of three crossbar architectures, namely the complementary crossbar architecture, the twin crossbar architecture, and the single crossbar architecture, for neuromorphic image recognition and conducted an experiment to determine the performance of the single crossbar architecture for simple pattern recognition. Ten grayscale images with the size of 32 × 32 pixels were used for testing and comparing the recognition rates of the three architectures. The recognition rates of the three memristor crossbar architectures were compared to each other when the noise level of images was varied from -10 to 4 dB and the percentage of memristance variation was varied from 0% to 40%. The simulation results showed that the single crossbar architecture had the best Gaussian noise input and memristance variation tolerance in terms of recognition rate. At the signal-to-noise ratio of -10 dB, the single crossbar architecture produced a recognition rate of 91%, which was 2% and 87% higher than those of the twin crossbar architecture and the complementary crossbar architecture, respectively. When the memristance variation percentage reached 40%, the single crossbar architecture had a recognition rate as high as 67.8%, which was 1.8% and 9.8% higher than the recognition rates of the twin crossbar architecture and the complementary crossbar architecture, respectively. Finally, we carried out an experiment to determine the performance of the single crossbar architecture with a fabricated 3 × 3 memristor crossbar based on carbon fiber and aluminum film. The experiment proved successful implementation of pattern recognition with the single crossbar architecture.
12. Low Distortion of Noise Filter Realization with 6.34 V/μs Fast Slew Rate and 120 mV p-p Output Noise Signal. Sensors (Basel) 2021;21:1008. PMID: 33540774. PMCID: PMC7867246. DOI: 10.3390/s21031008. Received 11/30/2020; revised 01/28/2021; accepted 01/28/2021.
Abstract
In order to reduce Gaussian noise, this paper proposes a method that averages the upper and lower envelopes generated by capturing the high and low peaks of the input signal. The designed fast-response filter has no cut-off frequency, so the high-order harmonics of the actual signal remain unchanged; it can therefore respond immediately to changes in the input signal while retaining the integrity of the actual signal, and it introduces only a small phase delay. The slew rate, phase delay and frequency response are confirmed by simulation in Multisim 13.0. The filter outlined in this article retains the high-order harmonics of the original signal, achieving a slew rate of 6.34 V/μs and an almost zero phase difference. When physically testing an input signal carrying 3 Vp-p Gaussian noise with our filter, a reduced noise signal of 120 mVp-p is obtained; the noise is thus suppressed to as little as 4% of its raw level.
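The envelope-averaging idea is straightforward to sketch in software, even though the paper realizes it as an analog circuit. Below, upper and lower envelopes are built by linear interpolation between local maxima and minima of the sampled signal; the peak-detection rule and the test signal are illustrative assumptions, not the paper's circuit behavior:

```python
import numpy as np

def envelope_average(x):
    """Average of the upper envelope (through local maxima) and the lower
    envelope (through local minima), each linearly interpolated."""
    idx = np.arange(x.size)
    peaks = [i for i in range(1, x.size - 1) if x[i] >= x[i - 1] and x[i] >= x[i + 1]]
    troughs = [i for i in range(1, x.size - 1) if x[i] <= x[i - 1] and x[i] <= x[i + 1]]
    upper = np.interp(idx, [0] + peaks + [x.size - 1],
                      [x[0]] + [x[i] for i in peaks] + [x[-1]])
    lower = np.interp(idx, [0] + troughs + [x.size - 1],
                      [x[0]] + [x[i] for i in troughs] + [x[-1]])
    return (upper + lower) / 2.0
```

The positive bias of the upper envelope and the negative bias of the lower envelope roughly cancel in the average, which is why the method suppresses zero-mean Gaussian noise without a cut-off frequency.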
13.
Abstract
Twin support vector regression (TSVR) is generally employed with the ε-insensitive loss function, which cannot adequately handle noise and outliers. By definition, the Huber loss function is quadratic for small errors and linear for larger ones, performs better than the Gaussian loss, and therefore easily withstands different types of noise and outliers. Recently, TSVR with Huber loss (HN-TSVR) was suggested to handle noise and outliers; like TSVR, however, it suffers from a singularity problem that degrades model performance. In this paper, a regularized version of HN-TSVR, regularization-based twin support vector regression (RHN-TSVR), is proposed to avoid the singularity problem of HN-TSVR by applying the structural risk minimization principle, which makes the model convex and well-posed. The proposed RHN-TSVR model handles noise as well as outliers and avoids the singularity issue. To show the validity and applicability of RHN-TSVR, experiments were performed on several artificially generated datasets with uniform, Gaussian and Laplacian noise, as well as on different real-world benchmark datasets, comparing against support vector regression, TSVR, ε-asymmetric Huber SVR, ε-support vector quantile regression and HN-TSVR. All benchmark real-world datasets were embedded with significant noise levels of 0%, 5% and 10% for the reported algorithms and the proposed approach. RHN-TSVR shows better prediction ability than the other reported models on both artificial and real-world datasets at the different noise levels.
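The Huber loss referred to above has a simple closed form: quadratic for residuals within delta of zero and linear beyond, so outliers contribute linearly rather than quadratically. A minimal sketch (delta values below are illustrative):

```python
import numpy as np

def huber(r, delta):
    """Huber loss: 0.5*r^2 for |r| <= delta, delta*(|r| - 0.5*delta) beyond,
    so large residuals (outliers) are penalized less harshly than by squared loss."""
    r = np.asarray(r, dtype=float)
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)
```

The two branches meet with matching value and slope at |r| = delta, which keeps the loss smooth enough for standard optimizers.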
14. A Novel Singular Value Decomposition-Based Denoising Method in 4-Dimensional Computed Tomography of the Brain in Stroke Patients with Statistical Evaluation. Sensors (Basel) 2020;20:3063. PMID: 32481740. PMCID: PMC7309118. DOI: 10.3390/s20113063. Received 04/20/2020; revised 05/24/2020; accepted 05/25/2020.
Abstract
Computed tomography (CT) is a widely used medical imaging modality for diagnosing various diseases. Among CT techniques, 4-dimensional CT perfusion (4D-CTP) of the brain is established in most centers for diagnosing strokes and is considered the gold standard for hyperacute stroke diagnosis. However, because the high radiation dose of 4D-CTP may pose serious health risks to stroke survivors, our research team aimed to introduce a novel image-processing technique. The proposed singular value decomposition (SVD)-based technique improves image quality by first separating image components using SVD and then reconstructing images from the signal components to remove noise. For the demonstration in this study, 20 4D-CTP dynamic images of suspected acute stroke patients were collected, and images processed with the proposed method were compared with unprocessed ones. Each acquired image was objectively evaluated using contrast-to-noise and signal-to-noise ratios. The scores of the parameters assessed in the qualitative evaluation of image quality improved to an excellent rating (p < 0.05). Our SVD-based denoising technique therefore improved the diagnostic value of the images by improving their quality; the technique and its statistical evaluation can be utilized in various clinical applications to provide advanced medical services.
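The separate-then-reconstruct idea can be illustrated with plain truncated SVD: keep the leading singular components (signal) and drop the trailing ones (mostly noise). This is a generic sketch, not the authors' full pipeline, and the rank is an illustrative choice:

```python
import numpy as np

def svd_denoise(img, rank):
    """Keep only the top-`rank` singular components of a 2-D image; small
    singular values mostly carry noise, so truncation acts as a denoiser."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

Choosing the rank is the real design problem: too low removes anatomy, too high retains noise; inspecting the singular-value spectrum for an elbow is a common heuristic.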
15. A Novel Approach to 3D-DOA Estimation of Stationary EM Signals Using Convolutional Neural Networks. Sensors (Basel) 2020;20:2761. PMID: 32408661. PMCID: PMC7285076. DOI: 10.3390/s20102761. Received 03/23/2020; revised 05/04/2020; accepted 05/05/2020.
Abstract
This paper proposes a novel three-dimensional direction-of-arrival (3D-DOA) estimation method for electromagnetic (EM) signals using convolutional neural networks (CNN) in a Gaussian or non-Gaussian noise environment. First of all, in the presence of Gaussian noise, four output covariance matrices of the uniform triangular array (UTA) are normalized and then fed into four neural networks for 1D-DOA estimation with identical parameters in parallel; then four 1D-DOA estimations of the UTA can be obtained, and finally, the 3D-DOA estimation could be obtained through post-processing. Secondly, in the presence of non-Gaussian noise, the array output covariance matrices are normalized by the infinity-norm and then processed in Gaussian noise environment; the infinity-norm normalization could effectively suppress impulsive outliers and then provide appropriate input features for the neural network. In addition, the outputs of the neural network are controlled by a signal monitoring network to avoid misjudgments. Comprehensive simulations demonstrate that in Gaussian or non-Gaussian noise environment, the proposed method is superior and effective in computation speed and accuracy in 1D-DOA and 3D-DOA estimations, and the signal monitoring network could also effectively control the neural network outputs. Consequently, we can conclude that CNN has better generalization ability in DOA estimation.
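The infinity-norm normalization used above against impulsive (non-Gaussian) noise is a one-liner: dividing the array output covariance matrix by its infinity norm bounds the scale of the CNN's input. The norm choice follows the abstract; the example matrix is an illustrative stand-in, not real array data:

```python
import numpy as np

def inf_norm_normalize(R):
    """Normalize a covariance matrix by its infinity norm (max absolute row
    sum) so impulsive outliers cannot blow up the network's input scale."""
    return R / np.linalg.norm(R, ord=np.inf)
```

After normalization the matrix's infinity norm is exactly 1 regardless of how heavy-tailed the noise was, giving the network inputs on a consistent scale.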
16. Adaptive Cuckoo Search based optimal bilateral filtering for denoising of satellite images. ISA Trans 2020;100:308-321. PMID: 31727322. DOI: 10.1016/j.isatra.2019.11.008. Received 03/19/2019; revised 09/04/2019; accepted 11/04/2019.
Abstract
A satellite image transmitted from satellite to the ground station is corrupted by different kinds of noises such as impulse noise, speckle noise and Gaussian noise. The traditional methods of denoising can remove the noise components but cannot preserve the quality of the image and lead to over-blurring of the edges in the image. To overcome these drawbacks, this paper develops an optimized bilateral filter for image denoising and preserving the edges using different nature inspired optimization algorithms which can effectively denoise the image without blurring the edges in the image. Denoising the image using a bilateral filter requires the decision of the control parameters so that the noise is removed and the edge details are preserved. With the help of optimization algorithms such as Particle Swarm Optimization (PSO), Cuckoo Search (CS) and Adaptive Cuckoo Search (ACS), the control parameters in the bilateral filter are decided for optimal performance. It is observed that the proposed Adaptive Cuckoo Search based bilateral filter denoising gives better results in terms of Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Feature Similarity Index (FSIM), Entropy and CPU time in comparison to traditional methods such as Median filter and RGB spatial filter.
17. Detection and classification of ECG noises using decomposition on mixed codebook for quality analysis. Healthc Technol Lett 2020;7:18-24. PMID: 32190336. PMCID: PMC7067057. DOI: 10.1049/htl.2019.0096. Received 09/29/2019; revised 09/29/2019; accepted 01/16/2020.
Abstract
In this Letter, a robust technique is presented to detect and classify different electrocardiogram (ECG) noises, including baseline wander (BW), muscle artefact (MA), power line interference (PLI) and additive white Gaussian noise (AWGN), based on signal decomposition on mixed codebooks. These codebooks employ temporal and spectral-bound waveforms which provide a sparse representation of ECG signals and can simultaneously extract ECG local waves as well as the BW, PLI, MA and AWGN noises. Further, different statistical approaches and temporal features are applied to the decomposed signals to detect the presence of the above-mentioned noises. The accuracy and robustness of the proposed technique are evaluated using a large set of noise-free and noisy ECG signals taken from the Massachusetts Institute of Technology-Boston's Beth Israel Hospital (MIT-BIH) arrhythmia database, the MIT-BIH polysomnographic database and the Fantasia database. The results show that the proposed technique achieves an average detection accuracy above 99% for all kinds of ECG noises. Furthermore, average results show that the technique achieves an average sensitivity of 98.55%, positive predictivity of 98.6% and classification accuracy of 97.19% for ECG signals taken from all three databases.
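The four noise classes handled above are easy to synthesize when testing a detector. The sketch below adds a slow sine for baseline wander, a 50 Hz sine for power line interference, smoothed white noise as a crude muscle-artefact stand-in, and white Gaussian noise; all amplitudes, the 50 Hz mains frequency, and the toy "ECG" are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def add_ecg_noises(ecg, fs, rng):
    """Corrupt an ECG with the four noise types: BW, PLI, MA and AWGN."""
    t = np.arange(ecg.size) / fs
    bw = 0.3 * np.sin(2 * np.pi * 0.3 * t)            # baseline wander (slow drift)
    pli = 0.1 * np.sin(2 * np.pi * 50.0 * t)          # power line interference
    ma = 0.1 * np.convolve(rng.standard_normal(ecg.size), np.ones(5) / 5, "same")
    awgn = 0.05 * rng.standard_normal(ecg.size)       # additive white Gaussian noise
    return ecg + bw + pli + ma + awgn

fs = 360.0                                            # MIT-BIH sampling rate
rng = np.random.default_rng(7)
clean = np.sin(2 * np.pi * 1.0 * np.arange(1024) / fs)   # toy stand-in for an ECG
noisy = add_ecg_noises(clean, fs, rng)
```

Because each component occupies a distinct temporal/spectral band, a decomposition on codebooks tuned to those bands can pull them apart again, which is the premise of the technique above.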
18. Evaluation of denoising digital breast tomosynthesis data in both projection and image domains and a study of noise model on digital breast tomosynthesis image domain. J Med Imaging (Bellingham) 2019;6:031410. PMID: 35834318. DOI: 10.1117/1.jmi.6.3.031410. Received 10/01/2018; accepted 01/29/2019.
Abstract
Digital breast tomosynthesis (DBT) is an imaging technique created to visualize 3-D mammary structures for the purpose of diagnosing breast cancer, and it is based on the principle of computed tomography. Because it uses harmful ionizing radiation, the "as low as reasonably achievable" (ALARA) principle should be respected, aiming to minimize the radiation dose while still obtaining an adequate examination. A noise filtering step is therefore fundamental to achieving the ALARA principle, as the image noise level rises as the radiation dose is reduced, making the image harder to analyze. In our work, a double denoising approach for DBT is proposed, filtering in both the projection (pre-reconstruction) and image (post-reconstruction) domains. First, in the prefiltering step, methods were used to filter the Poisson noise. The DBT projections were reconstructed with the filtered backprojection algorithm. Then, in the postfiltering step, methods were used to filter Gaussian noise. Experiments were performed on simulated data generated by open virtual clinical trials (OpenVCT) software and on a physical phantom, using several combinations of methods in each domain. Our results showed that double filtering (i.e., in both domains) is not superior to filtering in the projection domain only. Investigating a possible explanation for this result, we found that the noise in the DBT image domain is better modeled by a Burr distribution than by a Gaussian distribution. This contribution can open a new research direction in the DBT denoising problem.
Collapse
|
19
|
Actuator fault tolerant control based on probabilistic ultimate bounds. ISA TRANSACTIONS 2019; 84:20-30. [PMID: 30342813 DOI: 10.1016/j.isatra.2018.08.021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Revised: 06/08/2018] [Accepted: 08/17/2018] [Indexed: 06/08/2023]
Abstract
In this work, we introduce a novel set-based fault-tolerant control scheme for linear systems under Gaussian disturbances. In the proposed strategy, actuator faults are detected and diagnosed when residual trajectories enter and remain in certain sets that are computed as probabilistic ultimate bounds. After a fault is diagnosed, the control scheme is reconfigured to account for the corresponding actuator failure and preserve certain closed-loop features. We show that our strategy can detect and diagnose the faults considered with an arbitrarily small probability of misdetection.
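As a toy illustration of the set-based idea, the sketch below computes a Gaussian probabilistic bound on a scalar residual and declares a fault only once the residual persists outside it. The real scheme uses probabilistic ultimate-bound sets for full residual trajectories and controller reconfiguration, which this sketch does not reproduce; the `dwell` persistence count is a hypothetical tuning knob:

```python
import math

def probabilistic_bound(sigma, p_out=0.01):
    """Smallest b with P(|r| > b) <= p_out for a zero-mean Gaussian
    residual of standard deviation sigma (quantile via bisection on erf)."""
    target = 1.0 - p_out
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erf(mid / math.sqrt(2.0)) < target:
            lo = mid
        else:
            hi = mid
    return sigma * 0.5 * (lo + hi)

def diagnose(residuals, bound, dwell=5):
    """Declare a fault once the residual stays outside the bound for
    `dwell` consecutive samples; return the detection index, else None."""
    run = 0
    for t, r in enumerate(residuals):
        run = run + 1 if abs(r) > bound else 0
        if run >= dwell:
            return t
    return None
```

Requiring persistence rather than a single excursion is what drives the misdetection probability down, mirroring the "enter and remain" condition in the abstract.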
Collapse
|
20
|
How does transient signaling input affect the spike timing of postsynaptic neuron near the threshold regime: an analytical study. J Comput Neurosci 2017; 44:147-171. [PMID: 29192377 PMCID: PMC5851711 DOI: 10.1007/s10827-017-0664-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2016] [Revised: 07/14/2017] [Accepted: 09/11/2017] [Indexed: 11/05/2022]
Abstract
The noisy threshold regime, where even a small set of presynaptic neurons can significantly affect postsynaptic spike timing, has been suggested as a key requisite for computation in neurons with high variability. It has also been proposed that, under noisy conditions, signals are successfully transferred by a few strong synapses and/or by an assembly of nearly synchronous synaptic activities. We analytically investigate the impact of a transient signaling input on a leaky integrate-and-fire postsynaptic neuron that receives background noise near the threshold regime. The signaling input models a single strong synapse or a set of synchronous synapses, while the background noise represents many weak synapses. We find an analytic solution that explains how the first-passage time (interspike interval, ISI) density is changed by the transient signaling input. The analysis allows us to connect properties of the signaling input, such as spike timing and amplitude, with the postsynaptic first-passage time density in a noisy environment. Based on the analytic solution, we calculate the Fisher information with respect to the signaling input's amplitude. For a wide range of amplitudes, we observe non-monotonic behavior of the Fisher information as a function of background noise. Moreover, the Fisher information depends non-trivially on the signaling input's amplitude: as the amplitude varies, we observe a single maximum at high levels of background noise, which splits into two maxima in the low-noise regime. This finding demonstrates the benefit of the analytic solution for investigating signal transfer by neurons.
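A numerical counterpart to this setting can be sketched as a leaky integrate-and-fire simulation: background noise enters as a diffusion term and the transient signaling input as a single instantaneous jump. All parameter values below are hypothetical; the paper's results are analytic, and this sketch only illustrates the model being analysed:

```python
import math
import random

def lif_first_passage(mu=0.8, sigma=0.3, pulse_t=0.05, pulse_a=0.4,
                      tau=0.02, v_th=1.0, dt=1e-4, t_max=1.0, rng=None):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron,
    tau*dV = (mu - V)*dt + sigma*sqrt(tau)*dW, with the transient
    signaling input modelled as one jump of size pulse_a at time pulse_t.
    Returns the first threshold-crossing time, or None if none occurs."""
    rng = rng or random.Random(0)
    v, t = 0.0, 0.0
    while t < t_max:
        v += (mu - v) * dt / tau + sigma * math.sqrt(dt / tau) * rng.gauss(0.0, 1.0)
        if abs(t - pulse_t) < dt / 2.0:
            v += pulse_a  # transient signaling input
        t += dt
        if v >= v_th:
            return t
    return None
```

With `mu < v_th` the neuron sits in the subthreshold (noisy threshold) regime, so the pulse timing and amplitude control when the first passage occurs, which is exactly the dependence the analytic ISI density captures.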
Collapse
|
21
|
Noise Correlation Effect on Detection: Signals in Equicorrelated or Autoregressive(1) Gaussian Noise. IEEE SIGNAL PROCESSING LETTERS 2017; 24:1078-1082. [PMID: 28966543 PMCID: PMC5619669 DOI: 10.1109/lsp.2017.2702004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this letter, we consider the effect of noise correlation on the error performance of binary hypothesis signal detection, when one of two deterministic signals is received in correlated Gaussian noise. For the likelihood ratio detection scheme, analytical performance results are derived for the equicorrelated and autoregressive order-one models. Although it was previously known that the best signal lies in the direction of the eigenvector corresponding to the minimum eigenvalue of the noise covariance matrix, our investigation of the variation of the mean signal-to-noise power ratio as a function of the correlation parameter (i) shows how correlation leads to an increased probability of error up to a point, beyond which a monotonic decrease in error probability with increasing correlation is possible, and (ii) provides a max-min signal design solution for the case of an unknown correlation parameter. Numerical results are also included for some specific signals.
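For the equicorrelated model, the dependence of detection performance on the correlation parameter can be checked numerically using the closed-form (Sherman-Morrison) inverse of an equicorrelated covariance matrix. The deflection and error-probability formulas below are standard detection-theory quantities; the specific signals are illustrative, not those of the letter:

```python
import math

def deflection_equicorrelated(s, rho, sigma2=1.0):
    """d^2 = s^T C^{-1} s for the equicorrelated covariance
    C = sigma2 * [(1 - rho) I + rho * ones * ones^T],
    using its closed-form inverse."""
    n = len(s)
    ss = sum(x * x for x in s)
    s1 = sum(s)
    return (ss - rho * s1 * s1 / (1.0 + (n - 1) * rho)) / (sigma2 * (1.0 - rho))

def prob_error(d):
    """P_e = Q(d / 2) for equally likely antipodal signals in Gaussian noise."""
    return 0.5 * math.erfc(d / (2.0 * math.sqrt(2.0)))
```

For rho > 0 the minimum eigenvalue of C belongs to the subspace orthogonal to the all-ones vector, so a signal orthogonal to it achieves a larger deflection (hence lower error probability) than one aligned with it, consistent with the eigenvector result quoted in the abstract.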
Collapse
|
22
|
Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation. J Microsc 2016; 264:159-174. [PMID: 27238911 DOI: 10.1111/jmi.12425] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Revised: 03/30/2016] [Accepted: 04/26/2016] [Indexed: 11/28/2022]
Abstract
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. SNR estimation with the NLLSR method is compared against three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts, and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. The NLLSR method is shown to produce better estimation accuracy than the three existing methods; according to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% compared with the other three existing methods.
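The single-image SNR estimators being compared all work from the image autocorrelation: white noise adds power only at zero lag, so extrapolating the autocorrelation peak back to lag 0 separates signal power from noise power. A minimal 1-D version of the first-order interpolation baseline (one of the existing methods, not the NLLSR method itself) might look like:

```python
def autocorrelation(x, max_lag):
    """Biased sample autocorrelation (mean removed) at lags 0..max_lag."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]
    return [sum(xc[i] * xc[i + k] for i in range(n - k)) / n
            for k in range(max_lag + 1)]

def snr_first_order(x):
    """SNR estimate from a single noisy signal: extrapolate the
    autocorrelation to lag 0 from lags 1 and 2; the gap between the
    measured and extrapolated peak is taken as the noise power."""
    r = autocorrelation(x, 2)
    signal_power = 2.0 * r[1] - r[2]      # linear extrapolation to lag 0
    noise_power = r[0] - signal_power
    return signal_power / noise_power if noise_power > 0 else float("inf")
```

The NLLSR method replaces this linear extrapolation with a nonlinear least squares fit of the autocorrelation peak, which is where its accuracy gain comes from.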
Collapse
|
23
|
Trade-off performance analysis of LTI system with channel energy constraint. ISA TRANSACTIONS 2016; 65:88-95. [PMID: 27473213 DOI: 10.1016/j.isatra.2016.07.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2015] [Revised: 06/15/2016] [Accepted: 07/04/2016] [Indexed: 06/06/2023]
Abstract
In this paper, the trade-off between tracking error, control input energy, and channel input power is studied. Modelling the communication channel as an additive coloured Gaussian noise (ACGN) channel with limited bandwidth, a new performance index is proposed and minimized over all stabilizing two-degree-of-freedom controllers. The results show that the trade-off performance is determined by intrinsic characteristics of the plant, including the locations and directions of its unstable poles and non-minimum phase zeros. However, it is unrelated to the non-minimum phase zeros of the filter, owing to the two-degree-of-freedom controller. We also demonstrate that ACGN may degrade the tracking performance. Finally, a typical example is given to validate the theoretical results.
Collapse
|
24
|
Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function. Healthc Technol Lett 2016; 3:67-71. [PMID: 27222723 DOI: 10.1049/htl.2015.0007] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2015] [Revised: 07/31/2015] [Accepted: 08/21/2015] [Indexed: 11/19/2022] Open
Abstract
Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's t probability density function is introduced into the computation of the mean envelope of the data during the BEMD sifting process, making it robust to values far from the mean. The resulting BEMD is denoted tBEMD. To show the effectiveness of the tBEMD, several image denoising techniques are employed in the tBEMD domain: the fourth-order partial differential equation (PDE), the linear complex diffusion process (LCDP), the non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were used in the experiments, with the original images corrupted by additive Gaussian noise at three different levels. Based on peak signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD domain than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low; when it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.
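The evaluation in this abstract relies on peak signal-to-noise ratio. For reference, a minimal PSNR implementation over flat pixel lists (the metric itself, not the tBEMD denoiser):

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images given as
    flat, equal-length lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

Higher PSNR against the uncorrupted reference indicates better denoising, which is how the tBEMD-domain and classical-BEMD-domain results are ranked.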
Collapse
|
25
|
A comparison of delayed self-heterodyne interference measurement of laser linewidth using Mach-Zehnder and Michelson interferometers. SENSORS 2011; 11:9233-41. [PMID: 22163692 PMCID: PMC3231253 DOI: 10.3390/s111009233] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2011] [Revised: 09/13/2011] [Accepted: 09/23/2011] [Indexed: 11/17/2022]
Abstract
Linewidth measurements of a distributed feedback (DFB) fibre laser are made using delayed self-heterodyne interferometry (DSHI) with both Mach-Zehnder and Michelson interferometer configurations. Voigt fitting is used to extract and compare the Lorentzian and Gaussian linewidths and their associated noise sources. The respective measurements are w(L) (MZI) = (1.6 ± 0.2) kHz and w(L) (MI) = (1.4 ± 0.1) kHz. The Michelson configuration with Faraday rotator mirrors gives a slightly narrower linewidth with significantly reduced error. This is explained by the unscrambling of polarisation drift by the Faraday rotator mirrors, confirmed by comparison with non-rotating standard gold-coated fibre end mirrors.
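Voigt fitting separates the Lorentzian component of the beat-note lineshape from the Gaussian component, and the two component widths combine into the total Voigt width. The widely used Olivero-Longbothum approximation expresses that relation in closed form; this is a general lineshape identity, not the paper's fitting code:

```python
import math

def voigt_fwhm(fwhm_lorentz, fwhm_gauss):
    """Olivero-Longbothum approximation to the full width at half maximum
    of a Voigt profile from its Lorentzian and Gaussian component widths
    (accurate to roughly 0.02%)."""
    return 0.5346 * fwhm_lorentz + math.sqrt(0.2166 * fwhm_lorentz ** 2
                                             + fwhm_gauss ** 2)
```

Inverting this relation during fitting is what lets the kHz-scale Lorentzian linewidth be extracted even when Gaussian broadening of comparable size is present.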
Collapse
|