1. Mao J, Sun L, Chen J, Yu S. Overview of Research on Digital Image Denoising Methods. Sensors (Basel). 2025;25:2615. PMID: 40285303; PMCID: PMC12031399; DOI: 10.3390/s25082615.
Abstract
During image acquisition, images are often corrupted by noise because of imaging conditions and equipment limitations. Images are also disturbed by external noise during compression and transmission, which adversely affects subsequent processing such as image segmentation, target recognition, and text detection. The two-dimensional amplitude image is one of the most common image categories and is widely used in daily life and work; research on denoising algorithms for this kind of image is a hotspot in the field. Conventional denoising methods mainly exploit the nonlocal self-similarity of images and sparse representations in a transform domain. In particular, the block-matching and 3D filtering (BM3D) algorithm not only removes image noise effectively but also retains detailed image information well. As artificial intelligence develops, deep learning-based image denoising has become an important research direction. This review provides a general overview and comparison of traditional image-denoising methods and deep neural network-based image-denoising methods. First, the essential frameworks of classic traditional and deep neural network denoising approaches are presented, and the approaches are classified and summarized. Then, existing denoising methods are compared through quantitative and qualitative analyses on a public denoising dataset. Finally, we point out some potential challenges and directions for future research in the field of image denoising. This review can help researchers clearly understand the differences between various image-denoising algorithms, helping them to choose suitable algorithms or to improve and innovate on this basis, and it provides research ideas and directions for subsequent research in this field.
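As context for the comparisons this review surveys, the standard additive degradation model y = x + n and the PSNR quality metric can be sketched as follows. This is a minimal illustration: the flat toy image, noise level, and seed are assumptions for the example, not values from the review.

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=0):
    """Simulate the additive white Gaussian noise model y = x + n."""
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma, size=image.shape)

def psnr(reference, estimate, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, the most common denoising metric."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((64, 64), 128.0)          # toy flat test image
noisy = add_gaussian_noise(clean, sigma=25.0)
print(round(psnr(clean, noisy), 2))       # around 20 dB for sigma = 25
```

A denoiser is judged by how far it raises this PSNR (and perceptual metrics such as SSIM) relative to the noisy input.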
Affiliation(s)
- Jing Mao
- Graduate School of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
- Lianming Sun
- Department of Information Systems Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
- Jie Chen
- School of Electronic and Information Engineering, Ankang University, Ankang 725000, China
- Shunyuan Yu
- School of Electronic and Information Engineering, Ankang University, Ankang 725000, China
2. Janjušević N, Khalilian-Gourtani A, Flinker A, Feng L, Wang Y. GroupCDL: Interpretable Denoising and Compressed Sensing MRI via Learned Group-Sparsity and Circulant Attention. IEEE Trans Comput Imaging. 2025;11:201-212. PMID: 40124211; PMCID: PMC11928013; DOI: 10.1109/tci.2025.3539021.
Abstract
Nonlocal self-similarity within images has become an increasingly popular prior in deep-learning models. Despite their successful image restoration performance, such models remain largely uninterpretable due to their black-box construction. Our previous studies have shown that an interpretable construction of a fully convolutional denoiser (CDLNet), with performance on par with state-of-the-art black-box counterparts, is achievable by unrolling a convolutional dictionary learning algorithm. In this manuscript, we seek an interpretable construction of a convolutional network with a nonlocal self-similarity prior that performs on par with black-box nonlocal models. We show that such an architecture can be effectively achieved by upgrading the ℓ1 sparsity prior (soft-thresholding) of CDLNet to an image-adaptive group-sparsity prior (group-thresholding). The proposed learned group-thresholding makes use of nonlocal attention to perform spatially varying soft-thresholding on the latent representation. To enable effective training and inference on large images with global artifacts, we propose a novel circulant-sparse attention. We achieve competitive natural-image denoising performance compared to black-box nonlocal DNNs and transformers. The interpretable construction of our network allows for a straightforward extension to compressed sensing MRI (CS-MRI), yielding state-of-the-art performance. Lastly, we show robustness to noise-level mismatches between training and inference for both denoising and CS-MRI reconstruction.
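The contrast between the two shrinkage operators can be sketched as below. This is the generic textbook row-wise ℓ2 form of group-thresholding, not the paper's learned, attention-driven variant; the matrix and threshold are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    """Elementwise l1 prox: each coefficient is shrunk independently."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def group_threshold(z, tau, eps=1e-12):
    """Group shrinkage: each row is scaled by its l2 norm, so coefficients
    in the same group survive or vanish together."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    return z * np.maximum(1.0 - tau / (norms + eps), 0.0)

z = np.array([[3.0, -4.0],
              [0.5,  0.2]])
print(soft_threshold(z, 1.0))    # zeros small entries independently
print(group_threshold(z, 1.0))   # zeros the whole weak second row
```

The group version couples coefficients: the strong first row survives (shrunk as a unit) while the weak second row is eliminated entirely, which is the behavior a nonlocal-similarity grouping exploits.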
Affiliation(s)
- Nikola Janjušević
- New York University Tandon School of Engineering, Electrical and Computer Engineering Department, Brooklyn, NY 11201, USA
- New York University Grossman School of Medicine, Radiology Department, New York, NY 10016, USA
- Adeen Flinker
- New York University Grossman School of Medicine, Neurology Department, New York, NY 10016, USA
- Li Feng
- New York University Grossman School of Medicine, Radiology Department, New York, NY 10016, USA
- Yao Wang
- New York University Tandon School of Engineering, Electrical and Computer Engineering Department, Brooklyn, NY 11201, USA
3. Tiantian W, Hu Z, Guan Y. An efficient lightweight network for image denoising using progressive residual and convolutional attention feature fusion. Sci Rep. 2024;14:9554. PMID: 38664440; PMCID: PMC11045760; DOI: 10.1038/s41598-024-60139-x.
Abstract
While deep learning has become the go-to method for image denoising due to its impressive noise removal capabilities, excessive network depth often plagues existing approaches, leading to significant computational burdens. To address this bottleneck, we propose a novel lightweight progressive residual and attention mechanism fusion network that alleviates these limitations and handles both Gaussian and real-world image noise effectively. The approach begins with dense blocks (DBs) tasked with discerning the noise distribution, which substantially reduces network parameters while comprehensively extracting local image features. The network then adopts a progressive strategy in which shallow convolutional features are incrementally integrated with deeper features, establishing a residual fusion framework adept at extracting global features relevant to noise characteristics. Finally, the output feature maps from each DB are integrated with the robust edge features from the convolutional attention feature fusion module (CAFFM) and directed to the reconstruction layer, producing the final denoised image. Empirical analyses under Gaussian white noise and natural noise, spanning noise levels 15-50, indicate a marked enhancement in performance, quantitatively corroborated by higher average Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index for Color images (FSIMc) values, outperforming more than 20 existing methods across six varied datasets. Collectively, the proposed network denoises effectively while preserving essential image features such as edges and textures, and it finds applicability in a range of image-centric domains, encompassing image processing, computer vision, video analysis, and pattern recognition.
Affiliation(s)
- Wang Tiantian
- School of Computer and Software Engineering, Sias University, Zhengzhou, 451150, Henan, China
- Zhihua Hu
- School of Computer, Huanggang Normal University, Huanggang, 438000, Hubei, China
- Yurong Guan
- School of Computer, Huanggang Normal University, Huanggang, 438000, Hubei, China
4. Shi Z, Kong F, Cheng M, Cao H, Ouyang S, Cao Q. Multi-energy CT material decomposition using graph model improved CNN. Med Biol Eng Comput. 2024;62:1213-1228. PMID: 38159238; DOI: 10.1007/s11517-023-02986-w.
Abstract
In spectral CT imaging, the basis-material coefficient images obtained by material decomposition can estimate tissue composition, and their accuracy directly affects disease diagnosis. Although convolutional neural networks (CNNs) increase the precision of material decomposition, the traditional CNN convolution operator is limited in extracting non-local features from CT images. We therefore introduce a graph model built from multi-scale non-local self-similar patterns into multi-material decomposition (MMD) and propose a novel MMD method based on a graph edge-conditioned convolution U-net (GECCU-net) to enhance material image quality. The GECCU-net focuses on a multi-scale encoder: at the coding stage, three paths capture comprehensive image features, and local and non-local feature aggregation (LNFA) blocks integrate the local and non-local features from the different paths. The graph edge-conditioned convolution, operating in non-Euclidean space, extracts the non-local features. A hybrid loss function is defined to accommodate multi-scale input images and avoid over-smoothing of results. The proposed network is compared quantitatively with baseline CNN models on simulated and real datasets. The material images generated by the GECCU-net have less noise and fewer artifacts while retaining more tissue information. The structural similarity (SSIM) of the obtained abdomen and chest water maps reaches 0.9976 and 0.9990, respectively, and the RMSE reduces to 0.1218 and 0.4903 g/cm3. The proposed method can improve MMD performance and has potential applications.
Affiliation(s)
- Zaifeng Shi
- School of Microelectronics, Tianjin University, Tianjin, 300072, China.
- Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology, Tianjin, China.
- Fanning Kong
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Ming Cheng
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Huaisheng Cao
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Shunxin Ouyang
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Qingjie Cao
- School of Mathematical Sciences, Tianjin Normal University, Tianjin, 300387, China
5. Işıl Ç, Gan T, Ardic FO, Mentesoglu K, Digani J, Karaca H, Chen H, Li J, Mengu D, Jarrahi M, Akşit K, Ozcan A. All-optical image denoising using a diffractive visual processor. Light Sci Appl. 2024;13:43. PMID: 38310118; PMCID: PMC10838318; DOI: 10.1038/s41377-024-01385-6.
Abstract
Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image-denoising algorithms executed on computers exhibit latency due to the multiple iterations implemented on, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser that all-optically and non-iteratively cleans various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250 × λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
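Salt-and-pepper noise, and the classic digital remedy it is usually contrasted with (median filtering), can be sketched as follows. The flat test image, corruption rate, and 3x3 window are illustrative assumptions, not details of the diffractive denoiser.

```python
import numpy as np

def add_salt_and_pepper(image, prob, seed=0):
    """Flip roughly a fraction `prob` of pixels to the extremes 0 or 255."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    u = rng.random(image.shape)
    out[u < prob / 2] = 0.0
    out[u > 1.0 - prob / 2] = 255.0
    return out

def median_filter3(image):
    """3x3 median filter: robust to impulse outliers, unlike a mean filter."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    windows = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

clean = np.full((32, 32), 100.0)
noisy = add_salt_and_pepper(clean, prob=0.1)
restored = median_filter3(noisy)
```

The median discards the extreme impulse values inside each window, which is why it handles this noise type far better than linear averaging; the diffractive processor achieves a comparable effect optically, in a single light pass.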
Affiliation(s)
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Fazil Onuralp Ardic
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Koray Mentesoglu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Jagrit Digani
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Huseyin Karaca
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Hanlong Chen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Kaan Akşit
- University College London, Department of Computer Science, London, United Kingdom
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
6. Xu Z, Zhao H, Zheng Y, Guo H, Li S, Lyu Z. A dual nonsubsampled contourlet network for synthesis images and infrared thermal images denoising. PeerJ Comput Sci. 2024;10:e1817. PMID: 39669470; PMCID: PMC11636703; DOI: 10.7717/peerj-cs.1817.
Abstract
The most direct way to detect electrical switchgear faults is to use infrared thermal imaging for temperature measurement. However, infrared thermal images are usually polluted by noise and suffer from low contrast and blurred edges. To solve these problems, this article proposes a dual convolutional neural network (CNN) model based on the nonsubsampled contourlet transform (NSCT). First, combining two networks makes the overall structure of the model wider; compared with a deeper CNN, the dual CNN improves denoising performance without greatly increasing the computational cost. Second, the model uses the NSCT and its inverse to obtain more texture information and avoid the gridding effect, achieving a good balance between noise reduction and detail retention. Extensive simulation experiments show that the model can handle both synthetic and real noise, giving it high practical value.
Affiliation(s)
- Zhendong Xu
- State Grid Jilin Electric Power Co., Ltd, Liaoyuan Power Supply Company, Liaoyuan, China
- Hongdan Zhao
- State Grid Jilin Electric Power Co., Ltd, Liaoyuan Power Supply Company, Liaoyuan, China
- Yu Zheng
- State Grid Jilin Electric Power Co., Ltd, Liaoyuan Power Supply Company, Liaoyuan, China
- Hongbo Guo
- State Grid Jilin Electric Power Co., Ltd, Liaoyuan Power Supply Company, Liaoyuan, China
- Shengyang Li
- State Grid Jilin Electric Power Co., Ltd, Liaoyuan Power Supply Company, Liaoyuan, China
- Zhiyu Lyu
- School of Automation Engineering, Northeast Electric Power University, Jilin, China
7. Xue X, Ji D, Xu C, Zhao Y, Li Y, Hu C. Adaptive orthogonal directional total variation with kernel regression for CT image denoising. J Xray Sci Technol. 2024;32:1253-1271. PMID: 38995759; DOI: 10.3233/xst-230416.
Abstract
BACKGROUND: Low-dose computed tomography (CT) has been successful in reducing radiation exposure for patients. However, reconstruction from sparse-angle sampling in low-dose CT often leads to severe streak artifacts in the reconstructed images.
OBJECTIVE: To address this issue and preserve image edge details, this study proposes an adaptive orthogonal directional total variation method with kernel regression.
METHODS: The CT reconstructed images are initially processed through kernel regression to obtain an N-term Taylor series, which serves as a local representation of the regression function. Expanding the series to second order yields the desired estimate of the regression function together with localized first and second derivatives. To mitigate the impact of noise on these derivatives, kernel regression is performed again to update them. The original reconstructed image, its local approximation, and the updated derivatives are then combined by a weighting scheme to derive the image used for calculating orientation information. For further removal of streak artifacts, the study introduces the adaptive orthogonal directional total variation (AODTV) method, which denoises along both the edge direction and the normal direction, guided by the previously obtained orientation.
RESULTS: Both simulated and real experiments obtained good results. In two real experiments, the proposed method achieved PSNR values of 34.5408 dB and 29.4634 dB, which are 1.2392-5.9333 dB and 2.828-6.7995 dB higher than those of the comparison denoising algorithms, respectively, indicating good denoising performance.
CONCLUSIONS: The study demonstrates the effectiveness of the method in eliminating streak artifacts while preserving fine image details.
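The zeroth-order case of the kernel regression that the method builds on, a Nadaraya-Watson weighted local average, can be sketched in one dimension. The signal, bandwidth, and noise level below are illustrative assumptions; the paper itself uses a second-order two-dimensional expansion to recover derivatives as well.

```python
import numpy as np

def kernel_regression0(y, x, h):
    """Zeroth-order (N = 0) kernel regression: a Gaussian-weighted local
    average of the noisy samples at each evaluation point."""
    weights = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (weights @ y) / weights.sum(axis=1)

x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
clean = np.sin(2.0 * np.pi * x)
noisy = clean + rng.normal(0.0, 0.3, x.size)
smooth = kernel_regression0(noisy, x, h=0.03)
```

Widening the bandwidth h suppresses more noise at the cost of more bias; higher-order expansions, as in the paper, reduce that bias and additionally provide the derivative estimates used to steer the directional total variation.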
Affiliation(s)
- Xiying Xue
- School of Science, Tianjin University of Technology and Education, Tianjin, China
- Dongjiang Ji
- School of Science, Tianjin University of Technology and Education, Tianjin, China
- Chunyu Xu
- School of Science, Tianjin University of Technology and Education, Tianjin, China
- Yuqing Zhao
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
- Yimin Li
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
- Chunhong Hu
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
8. Liu Y, Yang L, Ma H, Mei S. Adaptive filter method in Bendlet domain for biological slice images. Math Biosci Eng. 2023;20:11116-11138. PMID: 37322974; DOI: 10.3934/mbe.2023492.
Abstract
Biological cross-sectional images consist mainly of closed-loop structures, which are well suited to representation by the second-order shearlet system with curvature (the Bendlet system). In this study, an adaptive, texture-preserving filter method in the Bendlet domain is proposed. The Bendlet system represents the original image as an image feature database based on image size and Bendlet parameters; this database can be divided into high-frequency and low-frequency sub-bands. The low-frequency sub-bands adequately represent the closed-loop structure of the cross-sectional images, while the high-frequency sub-bands accurately represent their detailed textural features. These properties reflect the characteristics of the Bendlet system and effectively distinguish it from the shearlet system. The proposed method takes full advantage of this structure and selects appropriate thresholds based on the texture distribution characteristics of the images in the database to eliminate noise. Locust slice images are taken as an example to test the proposed method. The experimental results show that, compared with other popular denoising algorithms, the proposed method significantly eliminates low-level Gaussian noise while protecting image information, with better PSNR and SSIM. The algorithm can be effectively applied to other biological cross-sectional images.
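The generic sub-band shrinkage recipe that such transform-domain filters refine can be sketched with a one-level Haar transform; the Bendlet system replaces this with curvature-aware directional sub-bands, and the signal and threshold below are illustrative assumptions.

```python
import numpy as np

def haar_subband_denoise(y, tau):
    """One-level orthonormal Haar analysis, hard-threshold the detail
    (high-frequency) sub-band, then synthesize back."""
    s = (y[0::2] + y[1::2]) / np.sqrt(2.0)   # low-frequency sub-band
    d = (y[0::2] - y[1::2]) / np.sqrt(2.0)   # high-frequency sub-band
    d = np.where(np.abs(d) > tau, d, 0.0)    # keep only strong details
    out = np.empty_like(y)
    out[0::2] = (s + d) / np.sqrt(2.0)       # exact inverse transform
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
clean = np.full(64, 5.0)
noisy = clean + rng.normal(0.0, 0.1, clean.size)
denoised = haar_subband_denoise(noisy, tau=0.5)
```

With tau = 0 the transform reconstructs the input exactly; with a threshold matched to the noise level, weak high-frequency coefficients (mostly noise) are discarded while the low-frequency structure survives, which is the same adaptive-threshold idea the paper applies per sub-band.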
Affiliation(s)
- Yafei Liu
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Linqiang Yang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Hongmei Ma
- Yantai Research Institute, China Agricultural University, Yantai 264670, China
- Shuli Mei
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
9. Ektefaie Y, Dasoulas G, Noori A, Farhat M, Zitnik M. Multimodal learning with graphs. Nat Mach Intell. 2023;5:340-350. PMID: 38076673; PMCID: PMC10704992; DOI: 10.1038/s42256-023-00624-6.
Abstract
Artificial intelligence for graphs has achieved remarkable success in modeling complex systems, ranging from dynamic networks in biology to interacting particle systems in physics. However, increasingly heterogeneous graph datasets call for multimodal methods that can combine different inductive biases, that is, the sets of assumptions that algorithms use to make predictions for inputs they have not encountered during training. Learning on multimodal datasets presents fundamental challenges because the inductive biases can vary by data modality and graphs might not be explicitly given in the input. To address these challenges, multimodal graph AI methods combine different modalities while leveraging cross-modal dependencies through graphs. Diverse datasets are combined using graphs and fed into sophisticated multimodal architectures, categorized as image-intensive, knowledge-grounded, and language-intensive models. Using this categorization, we introduce a blueprint for multimodal graph learning, use it to study existing methods, and provide guidelines to design new models.
Affiliation(s)
- Yasha Ektefaie
- Bioinformatics and Integrative Genomics Program, Harvard Medical School, Boston, MA 02115, USA
- Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA
- George Dasoulas
- Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA
- Harvard Data Science Initiative, Cambridge, MA 02138, USA
- Ayush Noori
- Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA
- Harvard College, Cambridge, MA 02138, USA
- Maha Farhat
- Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA
- Division of Pulmonary and Critical Care, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
- Marinka Zitnik
- Department of Biomedical Informatics, Harvard University, Boston, MA 02115, USA
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Harvard Data Science Initiative, Cambridge, MA 02138, USA
10. Zhai Q, Li X, Yang F, Jiao Z, Luo P, Cheng H, Liu Z. MGL: Mutual Graph Learning for Camouflaged Object Detection. IEEE Trans Image Process. 2023;32:1897-1910. PMID: 36417725; DOI: 10.1109/tip.2022.3223216.
Abstract
Camouflaged object detection, which aims to detect and segment objects that blend in with their surroundings, remains challenging for deep models due to the intrinsic similarities between foreground objects and the background. Ideally, an effective model should be capable of finding valuable clues in the given scene and integrating them into a joint learning framework to co-enhance the representation. Inspired by this observation, we propose a novel Mutual Graph Learning (MGL) model that shifts the conventional perspective of mutual learning from regular grids to the graph domain. Specifically, MGL decouples an image into two task-specific feature maps: one for finding the rough location of the target and the other for capturing its accurate boundary details. The mutual benefits are then fully exploited by recurrently reasoning about their high-order relations through graphs. Note that our method differs from most mutual learning models, which model all between-task interactions with a shared function: to increase information interaction, MGL is built with typed functions for dealing with the different complementary relations. To overcome the accuracy loss caused by interpolation to higher resolution and the computational redundancy of recurrent learning, S-MGL is equipped with a multi-source attention contextual recovery module, called R-MGL_v2, which uses pixel-level feature information iteratively. Experiments on challenging datasets, including CHAMELEON, CAMO, COD10K, and NC4K, demonstrate the effectiveness of our MGL, with superior performance to existing state-of-the-art methods. The code can be found at https://github.com/fanyang587/MGL.
11. Chen Q, Wang Y, Geng Z, Wang Y, Yang J, Lin Z. Equilibrium Image Denoising With Implicit Differentiation. IEEE Trans Image Process. 2023;32:1868-1881. PMID: 37028348; DOI: 10.1109/tip.2023.3255104.
Abstract
Recent learning-based image denoising approaches use unrolled architectures with a fixed number of repeatedly stacked blocks. However, due to the difficulty of training networks with deeper layers, simply stacking blocks may cause performance degradation, and the number of unrolled blocks must be manually tuned to find an appropriate value. To circumvent these problems, this paper describes an alternative approach based on implicit models; to the best of our knowledge, it is the first attempt to model iterative image denoising through an implicit scheme. The model employs implicit differentiation to calculate gradients in the backward pass, thus avoiding the training difficulties of explicit models and the elaborate selection of the iteration number. Our model is parameter-efficient and has only one implicit layer: a fixed-point equation whose solution is the desired noise feature. By simulating infinite iterations of the model, the final denoising result is given by the equilibrium, which is reached through accelerated black-box solvers. The implicit layer not only captures the non-local self-similarity prior for image denoising but also facilitates training stability, thereby boosting denoising performance. Extensive experiments show that our model outperforms state-of-the-art explicit denoisers in both qualitative and quantitative results.
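The core idea, a layer defined by a fixed-point equation solved to equilibrium, can be sketched as follows. The contractive toy map and plain Picard iteration are illustrative stand-ins for the paper's learned layer and accelerated black-box solvers.

```python
import numpy as np

def fixed_point_solve(f, z0, tol=1e-9, max_iter=1000):
    """Find the equilibrium z* = f(z*) by simple repeated substitution."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, (8, 8))
W *= 0.9 / np.linalg.norm(W, 2)      # spectral norm < 1 => f is a contraction
x = rng.normal(0.0, 1.0, 8)          # stand-in for the noisy input injection
f = lambda z: np.tanh(W @ z + x)     # one toy implicit "layer"
z_star = fixed_point_solve(f, np.zeros(8))
```

Because the map is contractive, the iteration converges to a unique equilibrium regardless of the starting point, so the effective "depth" is infinite without any stacked blocks; implicit differentiation then backpropagates through z* directly rather than through the solver's iterations.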
12. Image denoising in the deep learning era. Artif Intell Rev. 2022. DOI: 10.1007/s10462-022-10305-2.
13. Liu H, Liao P, Chen H, Zhang Y. ERA-WGAT: Edge-enhanced residual autoencoder with a window-based graph attention convolutional network for low-dose CT denoising. Biomed Opt Express. 2022;13:5775-5793. PMID: 36733738; PMCID: PMC9872905; DOI: 10.1364/boe.471340.
Abstract
Computed tomography (CT) has become a powerful tool for medical diagnosis. However, minimizing the X-ray radiation risk to the patient poses significant challenges for obtaining suitable low-dose CT images. Although various deep learning-based low-dose CT methods have produced impressive results, convolutional neural network-based methods focus on local information and are therefore limited in extracting non-local information. This paper proposes ERA-WGAT, a residual autoencoder incorporating an edge-enhancement module, which performs convolution with eight types of learnable operators to provide rich edge information, and a window-based graph attention convolutional network, which combines static and dynamic attention modules to explore non-local self-similarity. We use a compound loss function that combines MSE loss and multi-scale perceptual loss to mitigate the over-smoothing problem. Compared with current low-dose CT denoising methods, ERA-WGAT showed superior noise suppression and perceived image quality.
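The compounding structure of such a loss can be sketched as follows. Note that this is only a structural illustration: the multi-scale term here is plain MSE on average-pooled images, whereas the paper's perceptual term is computed from deep network features, and the weight and scale count are assumed values.

```python
import numpy as np

def avg_pool2(x):
    """Downsample by 2 with average pooling (a crude move to a coarser scale)."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def compound_loss(pred, target, weight=0.1, scales=3):
    """Pixel-level MSE plus weighted MSE terms at successively coarser scales."""
    loss = np.mean((pred - target) ** 2)
    p, t = pred, target
    for _ in range(scales):
        p, t = avg_pool2(p), avg_pool2(t)
        loss += weight * np.mean((p - t) ** 2)
    return loss

a = np.zeros((32, 32))
b = np.ones((32, 32))
print(compound_loss(a, a))   # 0.0 for identical images
```

Adding coarser-scale (or feature-space) terms penalizes structural discrepancies that per-pixel MSE alone under-weights, which is the mechanism behind the over-smoothing mitigation described above.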
Affiliation(s)
- Han Liu, College of Computer Science, Sichuan University, Chengdu 610065, China
- Peixi Liao, Department of Scientific Research and Education, The Sixth People’s Hospital of Chengdu, Chengdu 610051, China
- Hu Chen, College of Computer Science, Sichuan University, Chengdu 610065, China
- Yi Zhang, College of Computer Science, Sichuan University, Chengdu 610065, China
14
Enhanced statistical nearest neighbors with steerable pyramid transform for Gaussian noise removal in a color image. Evolutionary Intelligence 2022. [DOI: 10.1007/s12065-021-00627-5]
15
Li Y, Zhang Y, Cui W, Lei B, Kuang X, Zhang T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network With Edge Enhancement for Retinal Vessel Segmentation. IEEE Transactions on Medical Imaging 2022; 41:1975-1989. [PMID: 35167444] [DOI: 10.1109/tmi.2022.3151666]
Abstract
Retinal vessel segmentation with deep learning technology is a crucial auxiliary method for clinicians to diagnose fundus diseases. However, deep learning approaches inevitably lose edge information, which contains spatial features of vessels, during down-sampling, limiting the segmentation performance on fine blood vessels. Furthermore, existing methods ignore the dynamic topological correlations among feature maps in the deep learning framework, resulting in inefficient capture of channel characteristics. To address these limitations, we propose a novel dual encoder-based dynamic-channel graph convolutional network with edge enhancement (DE-DCGCN-EE) for retinal vessel segmentation. Specifically, we first design an edge detection-based dual encoder to preserve vessel edges during down-sampling. Secondly, we investigate a dynamic-channel graph convolutional network that maps the image channels to a topological space and synthesizes the features of each channel on the topological map, overcoming the insufficient use of channel information. Finally, we study an edge-enhancement block that fuses the edge and spatial features from the dual encoder, which is beneficial for improving the accuracy of fine blood vessel segmentation. Competitive experimental results on five retinal image datasets validate the efficacy of the proposed DE-DCGCN-EE, which achieves more remarkable segmentation results than other state-of-the-art methods, indicating its potential for clinical application.
16
Cloud Removal with SAR-Optical Data Fusion and Graph-Based Feature Aggregation Network. Remote Sensing 2022. [DOI: 10.3390/rs14143374]
Abstract
In Earth observation, clouds affect the quality and usability of optical remote sensing images in practical applications, and many cloud removal methods have been proposed to solve this issue. Among these methods, synthetic aperture radar (SAR)-based methods have more potential than others because SAR imaging is hardly affected by clouds and can reflect differences and changes in ground information. However, SAR images used as auxiliary information for cloud removal may be blurred and noisy, and traditional cloud removal methods cannot effectively exploit the similar non-local information of spectral and electromagnetic features. To overcome these weaknesses, we propose a novel cloud removal method using SAR-optical data fusion and a graph-based feature aggregation network (G-FAN). First, cloudy optical images and contemporaneous SAR images are concatenated and transformed into hyper-feature maps by pre-convolution. Second, the hyper-feature maps are input into the G-FAN to reconstruct the missing data of the cloud-covered area by aggregating the electromagnetic backscattering information of the SAR image and the spectral information of neighborhood and non-neighborhood pixels in the optical image. Finally, post-convolution and a long skip connection are adopted to reconstruct the final predicted cloud-free images. Both qualitative and quantitative results on simulated and real data show that our proposed method outperforms traditional deep learning methods for cloud removal.
17
Zhang H, Lian Q, Zhao J, Wang Y, Yang Y, Feng S. RatUNet: residual U-Net based on attention mechanism for image denoising. PeerJ Comput Sci 2022; 8:e970. [PMID: 35634105] [PMCID: PMC9138094] [DOI: 10.7717/peerj-cs.970]
Abstract
Deep convolutional neural networks (CNNs) have been very successful in image denoising. However, as the depth of plain networks grows, CNNs may suffer performance degradation, while insufficient depth limits the network's ability to extract image features and makes it difficult to fuse shallow image features into deep image information. In this work, we propose an improved deep convolutional U-Net framework (RatUNet) for image denoising. RatUNet improves U-Net as follows: (1) it uses the residual blocks of ResNet to deepen the network and thereby avoid performance saturation; (2) it improves the down-sampling method, which is conducive to extracting image features; (3) it improves the up-sampling method, which is used to restore image details; (4) it improves the skip-connection scheme of U-Net to fuse shallow feature information into deep image details, which is more conducive to restoring the clean image; and (5) to better handle the edge information of the image, RatUNet uses depthwise and polarized self-attention mechanisms to guide the CNN. Extensive experiments show that RatUNet is more efficient and performs better than existing state-of-the-art denoising methods, achieving especially high SSIM scores. Visualization results show that images denoised by RatUNet are smoother and sharper than those of other methods.
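The residual blocks that RatUNet borrows from ResNet follow the generic pattern y = x + F(x). The 1-D toy below (hypothetical kernels, not the paper's layers) shows why the identity skip path counters the degradation problem: with zero weights the block reduces to the identity, so stacking more blocks cannot make things worse:

```python
import numpy as np

def conv1d(x, k):
    """'same'-padded 1-D convolution standing in for a conv layer."""
    return np.convolve(x, k, mode="same")

def residual_block(x, k1, k2):
    """y = x + F(x): the block only has to learn the residual F, and with
    zero kernels it reduces to the identity mapping."""
    h = np.maximum(conv1d(x, k1), 0.0)  # conv + ReLU
    return x + conv1d(h, k2)            # identity skip connection
```

This fallback-to-identity property is what lets very deep stacks train without the performance saturation the abstract mentions.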
Affiliation(s)
- Huibin Zhang, Institute of Information Science and Technology, Yanshan University, Qinhuangdao, Hebei Province, China; Computer Department, Xinzhou Teachers University, Xinzhou, Shanxi Province, China
- Qiusheng Lian, Institute of Information Science and Technology, Yanshan University, Qinhuangdao, Hebei Province, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao, Hebei Province, China
- Jianmin Zhao, Institute of Information Science and Technology, Yanshan University, Qinhuangdao, Hebei Province, China; School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, Inner Mongolia, China
- Yining Wang, Computer Department, Xinzhou Teachers University, Xinzhou, Shanxi Province, China
- Yuchi Yang, Institute of Information Science and Technology, Yanshan University, Qinhuangdao, Hebei Province, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao, Hebei Province, China
- Suqin Feng, Computer Department, Xinzhou Teachers University, Xinzhou, Shanxi Province, China
18
Li D, Bai Y, Bai Z, Li Y, Shang C, Shen Q. Decomposed Neural Architecture Search for image denoising. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108914]
19
Zhao L, Zhu Q. Image denoising algorithm of social network based on multifeature fusion. Journal of Intelligent Systems 2022. [DOI: 10.1515/jisys-2022-0019]
Abstract
A social network image denoising algorithm based on multifeature fusion is proposed. Based on multifeature fusion theory, the denoising of social network images is treated as a fitting process of a neural network, and a simple, efficient multifeature-fusion convolutional structure is constructed for image denoising. The gray-level features of the social network image are collected, and the gray values are denoised and cleaned. Based on the image features, denoising is performed multiple times to ensure the accuracy of the algorithm and improve the precision of image processing. Experiments show that the algorithm reduces the average noise of processed images by 8.6905 dB, far more than other methods, and that the signal-to-noise ratio of the output image remains high, at about 30 dB, demonstrating its effectiveness in practical applications.
Affiliation(s)
- Lanfei Zhao, College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150000, China
- Qidan Zhu, College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150000, China
20
Ma R, Li S, Zhang B, Hu H. Meta PID Attention Network for Flexible and Efficient Real-World Noisy Image Denoising. IEEE Transactions on Image Processing 2022; 31:2053-2066. [PMID: 35167451] [DOI: 10.1109/tip.2022.3150294]
Abstract
Recent deep convolutional neural networks for real-world noisy image denoising have shown a huge boost in performance by training a well-engineered network on external image pairs. However, most of these methods are trained with supervision; once the testing data no longer match the training conditions, they can generalize poorly and easily suffer severe overfitting or degraded performance. To tackle this barrier, we propose a novel denoising algorithm, dubbed the Meta PID Attention Network (MPA-Net). MPA-Net is built by stacking Meta PID Attention Modules (MPAMs). In each MPAM, a second-order attention module (SAM) exploits channel-wise feature correlations with second-order statistics, which are then adaptively updated via a proportional-integral-derivative (PID)-guided meta-learning framework. This framework exploits the unique properties of the PID controller and the meta-learning scheme to dynamically generate filter weights for beneficial updates of the extracted features within a feedback control system. Moreover, the dynamic nature of the framework enables the generated weights to be flexibly tweaked according to the input at test time. Thus, MPAM not only achieves discriminative feature learning but also provides robust generalization to distinct noise in real images. Extensive experiments on ten datasets examine the effectiveness of MPA-Net quantitatively and qualitatively, demonstrating both superior denoising performance and a generalization ability beyond those of state-of-the-art denoising methods.
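The PID-guided update at the core of MPAM follows the textbook discrete PID law. A minimal stand-in controller (gains and usage are assumptions, not the paper's configuration) looks like:

```python
class PIDController:
    """Discrete PID law: u_t = Kp*e_t + Ki*sum(e_0..e_t) + Kd*(e_t - e_{t-1}).
    In MPA-Net, a signal of this form steers how generated filter weights
    are updated inside the feedback loop."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err             # integral term accumulates history
        deriv = err - self.prev_err      # derivative term reacts to change
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

The proportional term tracks the current error, the integral term corrects persistent bias, and the derivative term damps abrupt changes, which is why a PID signal suits gradual, input-dependent feature updates.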
21

22
Jin X, Lai Z, Jin Z. Learning Dynamic Relationships for Facial Expression Recognition Based on Graph Convolutional Network. IEEE Transactions on Image Processing 2021; 30:7143-7155. [PMID: 34370664] [DOI: 10.1109/tip.2021.3101820]
Abstract
Facial action unit (AU) analysis plays an important role in facial expression recognition (FER). Existing deep spectral convolutional networks (DSCNs) have achieved encouraging FER performance based on a set of facial local regions and a predefined graph structure. However, these regions do not have close relationships to AUs, and DSCNs cannot model the dynamic spatial dependencies of these regions for estimating different facial expressions. To tackle these issues, we propose a novel double dynamic relationships graph convolutional network (DDRGCN) that learns the strength of the edges in the facial graph via a trainable weighted adjacency matrix. We construct facial graph data from 20 regions of interest (ROIs) guided by different facial AUs. Furthermore, we devise an efficient graph convolutional network in which the inherent dependencies of vertices in the facial graph are learned automatically during network training. Notably, the proposed model has only 110K parameters and a 0.48 MB model size, significantly less than most existing methods. Experiments on four widely used FER datasets demonstrate that the proposed dynamic relationships graph network achieves superior results compared with existing lightweight networks, not just in accuracy but also in model size and speed.
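The trainable weighted adjacency described here can be sketched as a standard symmetric-normalized graph convolution in which the adjacency matrix A is itself a learned parameter rather than a fixed graph. Dimensions, initialization, and the non-negativity of A below are illustrative assumptions:

```python
import numpy as np

def gcn_layer(X, A_weights, W):
    """One graph-convolution step, H = ReLU(D^-1/2 (A + I) D^-1/2 X W),
    where the (symmetric, non-negative) adjacency A_weights is trainable."""
    A = A_weights + np.eye(A_weights.shape[0])  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))   # degree normalization
    A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ X @ W, 0.0)       # aggregate, project, ReLU
```

During training, gradients flow into A_weights as well as W, so the edge strengths between the 20 ROI nodes are learned rather than predefined.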
23
Xue B, He Y, Jing F, Ren Y, Gao M. Dynamic coarse-to-fine ISAR image blind denoising using active joint prior learning. Int J Intell Syst 2021. [DOI: 10.1002/int.22454]
Affiliation(s)
- Bin Xue, School of Information and Communication, National University of Defense Technology, Xi'an, China
- Yi He, School of Information and Communication, National University of Defense Technology, Xi'an, China
- Feng Jing, School of Information and Communication, National University of Defense Technology, Xi'an, China
- Yimeng Ren, School of Statistics, Renmin University of China, Beijing, China
- Mei Gao, School of Information and Communication, National University of Defense Technology, Xi'an, China
24
Li W, Lu J, Wuerkaixi A, Feng J, Zhou J. Reasoning Graph Networks for Kinship Verification: From Star-Shaped to Hierarchical. IEEE Transactions on Image Processing 2021; 30:4947-4961. [PMID: 33961555] [DOI: 10.1109/tip.2021.3077111]
Abstract
In this paper, we investigate the problem of facial kinship verification by learning hierarchical reasoning graph networks. Conventional methods usually focus on learning discriminative features for each facial image of a paired sample and neglect how to fuse the two facial image features and reason about the relations between them. To address this, we propose a Star-shaped Reasoning Graph Network (S-RGN). Our S-RGN first constructs a star-shaped graph in which each surrounding node encodes the information of comparisons in a feature dimension and the central node serves as the bridge for the interaction of surrounding nodes. We then perform relational reasoning on this star graph with iterative message passing. Because the S-RGN uses only one central node to analyze and process information from all surrounding nodes, its reasoning capacity is limited. We therefore further develop a Hierarchical Reasoning Graph Network (H-RGN) with more powerful and flexible capacity. More specifically, our H-RGN introduces a set of latent reasoning nodes and constructs a hierarchical graph with them. Bottom-up comparative information abstraction and top-down comprehensive signal propagation are then performed iteratively on the hierarchical graph to update the node features. Extensive experimental results on four widely used kinship databases show that the proposed methods achieve very competitive results.
25
Kong Y, Gao S, Yue Y, Hou Z, Shu H, Xie C, Zhang Z, Yuan Y. Spatio-temporal graph convolutional network for diagnosis and treatment response prediction of major depressive disorder from functional connectivity. Hum Brain Mapp 2021; 42:3922-3933. [PMID: 33969930] [PMCID: PMC8288094] [DOI: 10.1002/hbm.25529]
Abstract
The pathophysiology of major depressive disorder (MDD) has been found to be highly associated with the dysfunctional integration of brain networks. It is therefore imperative to explore neuroimaging biomarkers to aid diagnosis and treatment. In this study, we developed a spatiotemporal graph convolutional network (STGCN) framework to learn discriminative features from functional connectivity for automatic diagnosis and treatment response prediction of MDD. Briefly, dynamic functional networks were first obtained from resting-state fMRI with the sliding temporal window method. Secondly, a novel STGCN approach was proposed by introducing modules for spatial graph attention convolution (SGAC) and temporal fusion. The novel SGAC improves feature learning, and anatomy-prior-guided pooling was developed to enable feature dimension reduction. A temporal fusion module captures the dynamic features of functional connectivity between adjacent sliding windows. Finally, the proposed STGCN approach was applied to the tasks of diagnosis and antidepressant treatment response prediction for MDD. The performance of the framework was comprehensively examined on large cohorts of clinical data, demonstrating its effectiveness in classifying MDD patients and predicting treatment response. The sound performance suggests the potential of the STGCN for clinical use in diagnosis and treatment prediction.
Affiliation(s)
- Youyong Kong, Lab of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Shuwen Gao, Lab of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yingying Yue, Department of Psychosomatic and Psychiatry, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Zhenhua Hou, Department of Psychosomatic and Psychiatry, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Huazhong Shu, Lab of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Chunming Xie, Department of Neurology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Zhijun Zhang, Department of Neurology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Yonggui Yuan, Department of Psychosomatic and Psychiatry, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
26
Prasetyo H, Wicaksono Hari Prayuda A, Hsia CH, Guo JM. Deep Concatenated Residual Networks for Improving Quality of Halftoning-Based BTC Decoded Image. J Imaging 2021; 7:13. [PMID: 34460613] [PMCID: PMC8321252] [DOI: 10.3390/jimaging7020013]
Abstract
This paper presents a simple technique for improving the quality of halftoning-based block truncation coding (H-BTC) decoded images. H-BTC is an image compression technique inspired by classical block truncation coding (BTC). H-BTC yields a better decoded image than the classical BTC scheme under human visual observation; however, impulsive noise commonly appears in the H-BTC decoded image and makes it unpleasant to view. The method proposed in this paper aims to suppress this impulsive noise with a deep learning approach. The task can be regarded as an ill-posed inverse imaging problem, in which the space of candidate solutions is extremely large and underdetermined. The proposed method utilizes convolutional neural networks (CNNs) and residual learning frameworks to solve this problem, effectively reducing the occurrence of impulsive noise while improving the quality of H-BTC decoded images. The experimental results show the effectiveness of the proposed method in terms of subjective and objective measurements.
Affiliation(s)
- Heri Prasetyo, Department of Informatics, Universitas Sebelas Maret, Surakarta 57126, Indonesia (corresponding author)
- Chih-Hsien Hsia, Department of Computer Science and Information Engineering, National Ilan University, Yilan 260, Taiwan (corresponding author)
- Jing-Ming Guo, Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan