1
Huang J, Zhang X, Jin R, Xu T, Jin Z, Shen M, Lv F, Chen J, Liu J. Wavelet-based selection-and-recalibration network for Parkinson's disease screening in OCT images. Comput Methods Programs Biomed 2024;256:108368. PMID: 39154408. DOI: 10.1016/j.cmpb.2024.108368.
Abstract
BACKGROUND AND OBJECTIVE: Parkinson's disease (PD) is one of the most prevalent neurodegenerative brain diseases worldwide, so accurate PD screening is crucial for early clinical intervention and treatment. Recent clinical research indicates that pathological changes, such as the texture and thickness of the retinal layers, can serve as biomarkers for clinical PD diagnosis based on optical coherence tomography (OCT) images. However, the pathological manifestations of PD in the retinal layers are subtle compared with the more salient lesions associated with retinal diseases.
METHODS: Inspired by textural edge feature extraction in frequency-domain learning, we explore a potential approach to enhance the distinction between the retinal-layer feature distributions of PD cases and healthy controls. We introduce a simple yet novel wavelet-based selection-and-recalibration module that enhances the feature representations of a deep neural network by aggregating clinically meaningful properties, such as the retinal layers, in each frequency band. We combine this module with the residual block to form a deep network named the Wavelet-based Selection and Recalibration Network (WaveSRNet) for automatic PD screening.
RESULTS: Extensive experiments on a clinical PD-OCT dataset and two publicly available datasets demonstrate that our approach outperforms state-of-the-art methods. Visualization analyses and ablation studies are conducted to improve the explainability of WaveSRNet's decision-making process.
CONCLUSIONS: Our results suggest a potential role for the retina as an assessment tool for PD. Visual analysis shows that PD-related elements include not only certain retinal layers but also the location of the fovea in OCT images.
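The abstract does not spell out the module's exact equations. As a rough illustration of the idea — decompose a feature map into wavelet sub-bands, summarize each band per channel, and recalibrate channels from those band statistics — a minimal NumPy sketch might look like the following (all function names and the softmax/sigmoid combination are invented for illustration, not WaveSRNet's actual design):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT of a (C, H, W) feature map.
    Returns the LL, LH, HL, HH sub-bands, each (C, H//2, W//2)."""
    a = x[:, 0::2, 0::2]; b = x[:, 0::2, 1::2]
    c = x[:, 1::2, 0::2]; d = x[:, 1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def wavelet_recalibrate(x):
    """Toy selection-and-recalibration: per-band global pooling yields
    channel descriptors; a softmax over bands selects how much each
    frequency band contributes to the channel attention weights."""
    bands = haar_dwt2(x)                                           # 4 x (C, H/2, W/2)
    desc = np.stack([np.abs(b).mean(axis=(1, 2)) for b in bands])  # (4, C)
    band_weights = np.exp(desc) / np.exp(desc).sum(axis=0)         # softmax over bands
    attn = 1.0 / (1.0 + np.exp(-(band_weights * desc).sum(axis=0)))  # (C,) in (0, 1)
    return x * attn[:, None, None]

x = np.random.rand(8, 16, 16)      # 8 channels of a 16x16 feature map
y = wavelet_recalibrate(x)         # same shape, channel-wise rescaled
```

Because the attention weights lie in (0, 1), the module only rescales channels; it never changes the spatial layout of the features.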
Affiliation(s)
- Jingqi Huang
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Xiaoqing Zhang
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China; Center for High Performance Computing and Shenzhen Key Laboratory of Intelligent Bioinformatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Richu Jin
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Tao Xu
- The State Key Laboratory of Ophthalmology, Optometry and Vision Science, Wenzhou Medical University, Wenzhou, Zhejiang, China; The Oujiang Laboratory; The Affiliated Eye Hospital, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, China
- Zi Jin
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Meixiao Shen
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Fan Lv
- The Oujiang Laboratory; The Affiliated Eye Hospital, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Jiangfan Chen
- The State Key Laboratory of Ophthalmology, Optometry and Vision Science, Wenzhou Medical University, Wenzhou, Zhejiang, China; The Oujiang Laboratory; The Affiliated Eye Hospital, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China; The State Key Laboratory of Ophthalmology, Optometry and Vision Science, Wenzhou Medical University, Wenzhou, Zhejiang, China; Singapore Eye Research Institute, 169856, Singapore
2
Liu Y, Fan K, Zhou W. FPWT: Filter pruning via wavelet transform for CNNs. Neural Netw 2024;179:106577. PMID: 39098265. DOI: 10.1016/j.neunet.2024.106577.
Abstract
The enormous data and computational resources required by convolutional neural networks (CNNs) hinder their practical application on mobile devices. Filter pruning has become one of the practical approaches to this problem. Most existing pruning methods operate in the spatial domain, which ignores potential interconnections in the model structure and the decentralized distribution of image energy in the spatial domain. Frequency-domain transforms can remove the correlation between image pixels and concentrate the image energy distribution, which is the basis of lossy image compression. In this study, we find that frequency-domain transforms are also applicable to the feature maps of CNNs. We propose filter pruning via wavelet transform (FPWT), which combines the frequency-domain information of the wavelet transform (WT) with the output feature maps to expose the correlation between feature maps more clearly and to concentrate their energy distribution in the frequency domain. The importance score of each feature map is computed from the cosine similarity and the energy-weighted coefficients of its high- and low-frequency components, and filters are pruned according to these scores. Experiments on two image classification datasets validate the effectiveness of FPWT. For ResNet-110 on CIFAR-10, FPWT reduces FLOPs and parameters by more than 60.0% with a 0.53% accuracy improvement. For ResNet-50 on ImageNet, FPWT reduces FLOPs by 53.8% and parameters by 49.7% with only a 0.97% loss in Top-1 accuracy.
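The abstract names the ingredients of the importance score (cosine similarity plus energy-weighted high- and low-frequency coefficients) but not the exact formula. The sketch below is one plausible combination — the Haar split, the weighting `alpha`, and the "redundant if similar to the others" heuristic are all assumptions of mine, not the paper's definition:

```python
import numpy as np

def haar_bands(fmap):
    """1-level Haar split of one (H, W) feature map into a low-frequency
    band (LL) and stacked high-frequency bands (LH, HL, HH)."""
    a = fmap[0::2, 0::2]; b = fmap[0::2, 1::2]
    c = fmap[1::2, 0::2]; d = fmap[1::2, 1::2]
    low = (a + b + c + d) / 2.0
    high = np.stack([(a - b + c - d), (a + b - c - d), (a - b - c + d)]) / 2.0
    return low, high

def importance_scores(feature_maps, alpha=0.5):
    """Hypothetical FPWT-style score: a filter whose wavelet-domain
    response is highly similar (cosine) to the others is redundant, so
    importance = band-energy weight times (1 - mean similarity)."""
    vecs, energies = [], []
    for f in feature_maps:                        # feature_maps: (N, H, W)
        low, high = haar_bands(f)
        v = np.concatenate([low.ravel(), high.ravel()])
        vecs.append(v / (np.linalg.norm(v) + 1e-12))
        energies.append(alpha * (low**2).sum() + (1 - alpha) * (high**2).sum())
    vecs = np.stack(vecs)
    sim = vecs @ vecs.T                           # pairwise cosine similarity
    n = len(feature_maps)
    mean_sim = (sim.sum(axis=1) - 1.0) / (n - 1)  # exclude self-similarity
    return np.array(energies) * (1.0 - mean_sim)

fm = np.random.rand(6, 8, 8)          # six filters' feature maps
scores = importance_scores(fm)
keep = np.argsort(scores)[2:]         # prune the two least important filters
```

A duplicated feature map gets cosine similarity 1 with its twin, driving its score toward zero — exactly the redundancy the pruning is meant to remove.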
Affiliation(s)
- Yajun Liu
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Kefeng Fan
- China Electronics Standardization Institute, Beijing, 100007, China
- Wenju Zhou
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
3
Tan J, Yuan J, Fu X, Bai Y. Colonoscopy polyp classification via enhanced scattering wavelet Convolutional Neural Network. PLoS One 2024;19:e0302800. PMID: 39392783. PMCID: PMC11469526. DOI: 10.1371/journal.pone.0302800.
Abstract
Colorectal cancer (CRC) is among the most common cancers and has a high death rate. Colonoscopy is the best way to screen for CRC and has been shown to lower the risk of the disease. Computer-aided polyp classification techniques are therefore applied to identify colorectal cancer, but visually categorizing polyps is difficult because different polyps appear under different lighting conditions. Unlike previous works, this article presents the Enhanced Scattering Wavelet Convolutional Neural Network (ESWCNN), a polyp classification technique that combines a convolutional neural network (CNN) with the scattering wavelet transform (SWT) to improve classification performance. The method concatenates learnable image filters and wavelet filters on each input channel. The scattering wavelet filters extract common spectral features at various scales and orientations, while the learnable filters capture spatial image features that wavelet filters may miss. A network architecture for ESWCNN is designed on these principles and trained and tested on colonoscopy datasets (two public datasets and one private dataset). An n-fold cross-validation experiment for three classes (adenoma, hyperplastic, serrated) achieved a classification accuracy of 96.4%, and two-class polyp classification (positive and negative) reached 94.8% accuracy. In the three-class setting, correct classification rates of 96.2% for adenomas, 98.71% for hyperplastic polyps, and 97.9% for serrated polyps were achieved. In the two-class experiment, the proposed method reached an average sensitivity of 96.7% with 93.1% specificity. Furthermore, we compare our model with state-of-the-art general classification models and commonly used CNNs; six end-to-end CNN-based models were trained on two datasets of video sequences. The experimental results demonstrate that the proposed ESWCNN method classifies polyps with higher accuracy and efficacy than the state-of-the-art CNN models. These findings can guide future research in polyp classification.
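The filter-concatenation idea — fixed wavelet filters alongside learnable ones on the same input channel — can be pictured with a toy NumPy sketch. Here 2x2 Haar kernels stand in for the scattering filter bank, and the modulus of the wavelet response mimics the scattering transform's nonlinearity; ESWCNN's actual filter sizes and bank are not given in the abstract:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid' 2D correlation of one channel with one kernel."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kern).sum()
    return out

# Fixed 2x2 Haar filters (approximation + three detail orientations),
# used here as stand-ins for the scattering wavelet filter bank.
HAAR = [np.array([[ .5,  .5], [ .5,  .5]]),
        np.array([[ .5, -.5], [ .5, -.5]]),
        np.array([[ .5,  .5], [-.5, -.5]]),
        np.array([[ .5, -.5], [-.5,  .5]])]

def hybrid_layer(img, learnable):
    """Concatenate responses of learnable filters and fixed wavelet
    filters applied to the same input channel."""
    responses = [conv2d_valid(img, k) for k in learnable]
    responses += [np.abs(conv2d_valid(img, k)) for k in HAAR]  # scattering modulus
    return np.stack(responses)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
learned = [rng.standard_normal((2, 2)) for _ in range(3)]
out = hybrid_layer(img, learned)   # (3 learnable + 4 wavelet) x 7 x 7
```

The fixed branch guarantees stable multi-orientation spectral features from the first epoch, while the learnable branch is free to fill in whatever the wavelets miss.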
Affiliation(s)
- Jun Tan
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Guangdong Province Key Laboratory of Computational Science, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiamin Yuan
- Health Construction Administration Center, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, Guangdong, China
- The Second Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine (TCM), Guangzhou, Guangdong, China
- Xiaoyong Fu
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yilin Bai
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- China Southern Airlines, Guangzhou, Guangdong, China
4
Li S, Li T, Sun C, Chen X, Yan R. WPConvNet: An Interpretable Wavelet Packet Kernel-Constrained Convolutional Network for Noise-Robust Fault Diagnosis. IEEE Trans Neural Netw Learn Syst 2024;35:14974-14988. PMID: 37318968. DOI: 10.1109/tnnls.2023.3282599.
Abstract
Deep learning (DL) has achieved impressive diagnostic results in the fault diagnosis field. However, the poor interpretability and noise robustness of DL-based methods remain the main factors limiting their wide application in industry. To address these issues, an interpretable wavelet packet kernel-constrained convolutional network (WPConvNet) is proposed for noise-robust fault diagnosis, combining the feature extraction ability of wavelet bases with the learning ability of convolutional kernels. First, the wavelet packet convolutional (WPConv) layer is proposed, with constraints imposed on the convolutional kernels so that each convolutional layer is a learnable discrete wavelet transform. Second, a soft-threshold activation is proposed to reduce the noise component in feature maps, whose threshold is adaptively learned by estimating the standard deviation of the noise. Third, we link the cascaded convolutional structure of the convolutional neural network (CNN) with wavelet packet decomposition and reconstruction via the Mallat algorithm, making the model architecture interpretable. Extensive experiments on two bearing fault datasets show that the proposed architecture outperforms other diagnosis models in terms of interpretability and noise robustness.
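A soft-threshold activation with an estimated noise level can be approximated by the classic wavelet-denoising recipe: estimate sigma from the median absolute deviation and shrink coefficients toward zero. The sketch below uses the fixed universal threshold rather than WPConvNet's adaptively learned one, so treat it as the textbook baseline, not the paper's activation:

```python
import numpy as np

def soft_threshold_activation(x):
    """Soft-threshold activation in the spirit of WPConvNet: estimate the
    noise level via the median absolute deviation (the classic
    wavelet-denoising estimator, sigma ~ MAD / 0.6745), pick the
    universal threshold, and shrink coefficients toward zero."""
    sigma = np.median(np.abs(x)) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(x.size))      # universal threshold
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

# Small entries (noise) are zeroed; large entries (signal) are kept,
# shrunk by the threshold.
noisy = np.array([0.01, -0.02, 0.015, 5.0, -4.0, 0.005])
clean = soft_threshold_activation(noisy)
```

Unlike ReLU, this activation is odd-symmetric and kills small coefficients of either sign, which matches the statistics of noisy wavelet coefficients.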
5
Yeh CH, Lo C, He CH. Multibranch Wavelet-Based Network for Image Demoiréing. Sensors (Basel) 2024;24:2762. PMID: 38732870. PMCID: PMC11086364. DOI: 10.3390/s24092762.
Abstract
Moiré patterns, caused by aliasing between the camera's sensor and the monitor, can severely degrade image quality. Image demoiréing is a multi-task image restoration problem that includes texture and color restoration. This paper proposes a new multibranch wavelet-based image demoiréing network (MBWDN) for moiré pattern removal. Moiré images are separated into sub-band images using wavelet decomposition, and demoiréing is achieved through the different learning strategies of two networks: a moiré removal network (MRN) and a detail-enhanced moiré removal network (DMRN). MRN removes moiré patterns from low-frequency images while preserving the structure of smooth areas. DMRN simultaneously removes high-frequency moiré patterns and enhances fine details. Wavelet decomposition is used to replace traditional upsampling and max pooling, effectively increasing the receptive field of the network without losing spatial information. By decomposing the moiré image into different levels with the wavelet transform, the feature learning results of each branch can be fully preserved and fed into the next branch, avoiding possible distortions in the recovered image. Thanks to the separation of high- and low-frequency images during feature training, the two proposed networks achieve impressive moiré removal. Extensive experiments on public datasets show that the proposed method is effective both quantitatively and qualitatively compared with state-of-the-art approaches.
Affiliation(s)
- Chia-Hung Yeh
- Department of Electrical Engineering, National Taiwan Normal University, Taipei 10610, Taiwan
- Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Chen Lo
- Department of Electrical Engineering, National Taiwan Normal University, Taipei 10610, Taiwan
- Cheng-Han He
- Department of Electrical Engineering, National Taiwan Normal University, Taipei 10610, Taiwan
6
Wang Y, Cui W, Yu T, Li X, Liao X, Li Y. Dynamic Multi-Graph Convolution-Based Channel-Weighted Transformer Feature Fusion Network for Epileptic Seizure Prediction. IEEE Trans Neural Syst Rehabil Eng 2023;31:4266-4277. PMID: 37782584. DOI: 10.1109/tnsre.2023.3321414.
Abstract
Electroencephalogram (EEG)-based seizure prediction plays an important role in closed-loop neuromodulation systems. However, most existing seizure prediction methods based on graph convolutional networks construct only a static graph, ignoring multi-domain dynamic changes in the deep graph structure. Moreover, existing feature fusion strategies generally concatenate coarse-grained epileptic EEG features directly, leading to suboptimal seizure prediction performance. To address these issues, we propose a novel multi-branch dynamic multi-graph convolution based channel-weighted transformer feature fusion network (MB-dMGC-CWTFFNet) for patient-specific seizure prediction. Specifically, a multi-branch (MB) feature extractor is first applied to jointly capture temporal, spatial and spectral representations from the epileptic EEG. Then, we design a point-wise dynamic multi-graph convolution network (dMGCN) to dynamically learn deep graph structures, which can effectively extract high-level features from the multi-domain graph. Finally, by integrating local and global channel-weighted strategies with a multi-head self-attention mechanism, a channel-weighted transformer feature fusion network (CWTFFNet) is adopted to efficiently fuse the multi-domain graph features. The proposed MB-dMGC-CWTFFNet is evaluated on the public CHB-MIT EEG dataset and a private intracranial sEEG dataset, and the experimental results demonstrate that it achieves outstanding prediction performance compared with state-of-the-art methods, indicating an effective tool for patient-specific seizure warning. Our code will be available at: https://github.com/Rockingsnow/MB-dMGC-CWTFFNet.
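A dynamic graph convolution builds its adjacency from the data itself instead of fixing it in advance. A bare-bones NumPy sketch of that idea follows — the dot-product similarity, row-softmax adjacency, and tanh propagation rule are illustrative guesses, not the paper's dMGCN:

```python
import numpy as np

def dynamic_graph_conv(x, w):
    """One dynamic graph convolution step: the adjacency is built on the
    fly from pairwise channel affinity (row-softmax of dot products),
    then features are propagated as A @ X @ W with a nonlinearity."""
    sim = x @ x.T                                    # (C, C) channel affinity
    a = np.exp(sim - sim.max(axis=1, keepdims=True)) # numerically stable softmax
    a = a / a.sum(axis=1, keepdims=True)             # row-stochastic adjacency
    return np.tanh(a @ x @ w)

rng = np.random.default_rng(1)
x = rng.standard_normal((18, 32))   # 18 EEG channels x 32 features
w = rng.standard_normal((32, 16))   # learnable projection
h = dynamic_graph_conv(x, w)        # (18, 16) updated node features
```

Because the adjacency is recomputed from the current features, channel relationships can differ per input window — the "dynamic" property the abstract contrasts with static graphs.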
7
Imtiaz T, Fattah SA, Kung SY. BAWGNet: Boundary aware wavelet guided network for the nuclei segmentation in histopathology images. Comput Biol Med 2023;165:107378. PMID: 37678139. DOI: 10.1016/j.compbiomed.2023.107378.
Abstract
Precise cell nucleus segmentation is critical in many biological analyses and disease diagnoses. However, the variability in nuclei structure, color, and modality of histopathology images makes automatic computer-aided nuclei segmentation very difficult. Traditional encoder-decoder based deep learning schemes mainly utilize spatial-domain information, which may limit their ability to recognize small nuclei regions after successive downsampling operations. In this paper, a boundary aware wavelet guided network (BAWGNet) is proposed by incorporating a boundary aware unit along with a wavelet-domain-guided attention mechanism at each stage of the encoder-decoder output. Here, the high-frequency two-dimensional discrete wavelet transform (2D-DWT) coefficients are utilized in the attention mechanism to guide the spatial information obtained from the encoder-decoder output stages toward the nuclei segmentation task. Meanwhile, the boundary aware unit (BAU) captures the nuclei's boundary information, ensuring accurate prediction of nuclei pixels in edge regions. Furthermore, the preprocessing steps used in our methodology ensure the data's uniformity by converting it to similar color statistics. Extensive experiments conducted on three benchmark histopathology datasets (DSB, MoNuSeg and TNBC) exhibit the outstanding segmentation performance of the proposed method (with Dice scores of 90.82%, 85.74%, and 78.57%, respectively). An implementation of the proposed architecture is available at https://github.com/tamjidimtiaz/BAWGNet.
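The wavelet guidance can be pictured as a spatial gate built from the magnitude of the high-frequency DWT coefficients, which respond strongly at nuclei boundaries. In the sketch below, the Haar bands, nearest-neighbor upsampling, and `1 + gate` boosting are simplifications of my own, not BAWGNet's exact attention:

```python
import numpy as np

def wavelet_guided_attention(feat):
    """Toy wavelet-guidance gate: sum the magnitudes of the three
    high-frequency Haar bands (edge responses), normalize to [0, 1],
    upsample back to full resolution, and use the result to boost the
    feature map where boundaries are likely."""
    a = feat[0::2, 0::2]; b = feat[0::2, 1::2]
    c = feat[1::2, 0::2]; d = feat[1::2, 1::2]
    high = (np.abs(a - b + c - d) + np.abs(a + b - c - d)
            + np.abs(a - b - c + d)) / 2.0
    gate = high / (high.max() + 1e-12)             # spatial gate in [0, 1]
    gate_up = np.kron(gate, np.ones((2, 2)))       # nearest-neighbor upsample
    return feat * (1.0 + gate_up)                  # boost boundary responses

f = np.random.rand(8, 8)
out = wavelet_guided_attention(f)
```

Smooth regions (where the high-frequency bands vanish) pass through unchanged, while edge-rich regions are amplified by up to a factor of two.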
Affiliation(s)
- Tamjid Imtiaz
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Shaikh Anowarul Fattah
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
- Sun-Yuan Kung
- Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, 08544, USA
8
P.S. A, Sahare SA, Gopi VP. ResCoWNet: A deep convolutional neural network with residual learning based on DT-CWT for despeckling Optical Coherence Tomography images. Optik 2023;284:170924. DOI: 10.1016/j.ijleo.2023.170924.
9
Zhao Y, Wang S, Zhang Y, Qiao S, Zhang M. WRANet: wavelet integrated residual attention U-Net network for medical image segmentation. Complex Intell Syst 2023:1-13. PMID: 37361970. PMCID: PMC10248349. DOI: 10.1007/s40747-023-01119-y.
Abstract
Medical image segmentation is crucial for the diagnosis and analysis of disease. Deep convolutional neural network methods have achieved great success in medical image segmentation, but they are highly susceptible to noise interference during propagation through the network, where even weak noise can dramatically alter the network output. As the network deepens, it can also face problems such as exploding and vanishing gradients. To improve the robustness and segmentation performance of the network, we propose a wavelet residual attention network (WRANet) for medical image segmentation. We replace the standard downsampling modules in CNNs (e.g., max pooling and average pooling) with the discrete wavelet transform, decompose the features into low- and high-frequency components, and discard the high-frequency components to eliminate noise. At the same time, the problem of feature loss is effectively addressed by introducing an attention mechanism. Experiments show that our method performs aneurysm segmentation effectively, achieving a Dice score of 78.99%, an IoU score of 68.96%, a precision of 85.21%, and a sensitivity of 80.98%. In polyp segmentation, a Dice score of 88.89%, an IoU score of 81.74%, a precision of 91.32%, and a sensitivity of 91.07% were achieved. Furthermore, comparison with state-of-the-art techniques demonstrates the competitiveness of WRANet.
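The downsampling replacement described in the abstract is concrete enough to sketch directly: a one-level Haar DWT whose low-frequency LL band is kept and whose three high-frequency bands are discarded. A plain NumPy version (WRANet of course also routes the retained features through its attention module, which is omitted here):

```python
import numpy as np

def dwt_downsample(x):
    """Downsample (..., H, W) features with a 1-level Haar DWT, keeping
    only the low-frequency LL band and discarding the high-frequency
    bands (treated as noise in the WRANet abstract)."""
    a = x[..., 0::2, 0::2]; b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]; d = x[..., 1::2, 1::2]
    return (a + b + c + d) / 2.0    # LL band of the orthogonal Haar DWT

x = np.arange(16, dtype=float).reshape(1, 4, 4)
y = dwt_downsample(x)   # halves each spatial dimension, like a 2x2 pooling
```

Unlike max pooling, the LL band is a fixed orthogonal projection, so the operation is linear, invertible together with the discarded bands, and less sensitive to single-pixel noise spikes.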
Affiliation(s)
- Yawu Zhao
- School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Shudong Wang
- School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Yulin Zhang
- College of Mathematics and System Science, Shandong University of Science and Technology, Qingdao, Shandong, China
- Sibo Qiao
- School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Mufei Zhang
- Inspur Cloud Information Technology Co., Inspur, Jinan, Shandong, China
10
Jiang S, Xu Y, Li D, Fan R. Multi-scale fusion for RGB-D indoor semantic segmentation. Sci Rep 2022;12:20305. PMID: 36434023. PMCID: PMC9700838. DOI: 10.1038/s41598-022-24836-9.
Abstract
In computer vision, convolution and pooling operations tend to lose high-frequency information, and contour details disappear as the network deepens, especially in image semantic segmentation. Existing RGB-D semantic segmentation methods cannot fully exploit the information in the RGB and depth images, whereas the wavelet transform can retain both the low- and high-frequency information of the original image. To address this information loss, we propose an RGB-D indoor semantic segmentation network based on multi-scale fusion: a wavelet transform fusion module retains contour details, a nonsubsampled contourlet transform replaces the pooling operation, and a multiple pyramid module aggregates multi-scale information and global context. The proposed method retains multi-scale information with the help of the wavelet transform and makes full use of the complementarity of high- and low-frequency information. Because the multi-frequency characteristics are preserved as the depth of the convolutional neural network increases, the segmentation accuracy of edge contour details is also improved. We evaluated the proposed method on the commonly used indoor datasets NYUv2 and SUN RGB-D, and the results show that it achieves state-of-the-art performance with real-time inference.
Affiliation(s)
- Shiyi Jiang
- College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
- Yang Xu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
- Guiyang Aluminum-magnesium Design and Research Institute Co., Ltd., Guiyang, 550009, China
- Danyang Li
- College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
- Runze Fan
- College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
11
Aliasing and adversarial robust generalization of CNNs. Mach Learn 2022. DOI: 10.1007/s10994-022-06222-8.
Abstract
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, i.e., aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
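The aliasing the paper measures is easy to reproduce in one dimension: subsampling a signal at its Nyquist frequency without low-pass filtering folds it onto a completely different (here constant) signal, while a cheap blur before subsampling suppresses it. A NumPy toy example (illustrating the phenomenon only, not the paper's actual aliasing measure):

```python
import numpy as np

def subsample(sig, blur=False):
    """Downsample by 2, optionally low-pass filtering first. Without the
    blur, frequencies above the new Nyquist limit alias into the output."""
    if blur:
        sig = np.convolve(sig, [0.25, 0.5, 0.25], mode="same")  # cheap low-pass
    return sig[::2]

n = np.arange(64)
hi = (-1.0) ** n                    # highest representable frequency: +1, -1, ...
aliased = subsample(hi)             # keeps only the +1 samples: aliases to DC
smoothed = subsample(hi, blur=True) # the blur cancels the oscillation first
```

The same mechanism operates inside strided convolution and pooling layers, which is why the paper's robust models benefit from downsampling more carefully.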