1
Liu J, Zhang W, Liu F, Yang J, Xiao L. Deep one-class probability learning for end-to-end image classification. Neural Netw 2025; 185:107201. [PMID: 39903959] [DOI: 10.1016/j.neunet.2025.107201]
Abstract
One-class learning has many potential applications in novelty, anomaly, and outlier detection systems. It aims to distinguish positive from negative samples using a model trained only on positive (one-class annotated) samples. Because training an end-to-end classification network is difficult in this setting, existing methods usually make decisions indirectly. To fully exploit the learning capability of a deep network, in this paper we propose a deep end-to-end binary image classifier based on a convolutional neural network that takes an image as input and outputs a classification result. Without negative training samples, we establish an energy-driven probabilistic model to learn the distribution of positive samples. The energy is defined on the output of the network, which subtly models the deep discriminations into statistics. During optimization, to overcome the difficulty of distribution estimation, we propose a novel sampling method based on particle swarm optimization. Compared with existing methods, the proposed method directly outputs classification results without additional thresholding or estimation operations. Moreover, the deep network is optimized directly via the probabilistic model, which results in better adaptation to the positive distribution and the classification task. Experiments demonstrate the effectiveness and state-of-the-art performance of the proposed method.
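The abstract leaves the particle-swarm step abstract; as a loose illustration of the machinery it builds on (a generic PSO minimizing a toy energy function, not the authors' sampling algorithm; all names and constants here are hypothetical), a minimal sketch:

```python
import numpy as np

def pso_minimize(energy, dim, n_particles=30, n_iters=100, seed=0):
    """Minimal particle swarm optimizer: returns the best position found."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                          # each particle's best position
    pbest_e = np.array([energy(p) for p in pos])
    gbest = pbest[np.argmin(pbest_e)].copy()    # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and attraction weights
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        e = np.array([energy(p) for p in pos])
        improved = e < pbest_e
        pbest[improved], pbest_e[improved] = pos[improved], e[improved]
        gbest = pbest[np.argmin(pbest_e)].copy()
    return gbest

# Toy quadratic energy with its minimum at (1, 2); the swarm should land nearby.
best = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2, dim=2)
```

In the paper the swarm serves distribution sampling rather than minimization of a single objective; this sketch only shows the swarm update rule itself.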
Affiliation(s)
- Jia Liu, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Wenhua Zhang, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Fang Liu, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Jingxiang Yang, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Liang Xiao, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
2
Sun Y, Lei L, Guan D, Kuang G, Li Z, Liu L. Locality Preservation for Unsupervised Multimodal Change Detection in Remote Sensing Imagery. IEEE Trans Neural Netw Learn Syst 2025; 36:6955-6969. [PMID: 38809739] [DOI: 10.1109/tnnls.2024.3401696]
Abstract
Multimodal change detection (MCD) is a topic of increasing interest in remote sensing. Because of their different imaging mechanisms, multimodal images cannot be compared directly to detect changes. In this article, we explore the topological structure of multimodal images and construct links between the class relationships (same/different) and change labels (changed/unchanged) of pairwise superpixels, which are invariant to the imaging modality. With these links, we formulate the MCD problem within a mathematical framework termed the locality-preserving energy model (LPEM), which maintains the local consistency constraints embedded in the links: structure consistency based on feature similarity and label consistency based on spatial continuity. Because the foundation of LPEM, i.e., the links, is intuitively explainable and universal, the proposed method is very robust across different MCD situations. Notably, LPEM is built directly on the label of each superpixel, so it is a paradigm that outputs the change map (CM) directly, without generating an intermediate difference image (DI) as most previous algorithms do. Experiments on different real datasets demonstrate the effectiveness of the proposed method. Source code of the proposed method is made available at https://github.com/yulisun/LPEM.
3
Qu J, Dong W, Yang Y, Zhang T, Li Y, Du Q. Cycle-Refined Multidecision Joint Alignment Network for Unsupervised Domain Adaptive Hyperspectral Change Detection. IEEE Trans Neural Netw Learn Syst 2025; 36:2634-2647. [PMID: 38170657] [DOI: 10.1109/tnnls.2023.3347301]
Abstract
Hyperspectral change detection, which provides abundant information on land-cover changes on the Earth's surface, has become one of the most crucial tasks in remote sensing. Recently, deep-learning-based change detection methods have shown remarkable performance, but acquiring labeled data is extremely expensive and time-consuming. It is intuitive to learn changes from a scene with sufficient labeled data and adapt them to a new, unlabeled scene. However, the nonnegligible domain shift between different scenes leads to inevitable performance degradation. In this article, a cycle-refined multidecision joint alignment network (CMJAN) is proposed for unsupervised domain adaptive hyperspectral change detection, which progressively aligns the data distributions of the source and target domains using cycle-refined high-confidence labeled samples. It has two key characteristics: 1) it progressively mitigates the distribution discrepancy to learn a domain-invariant difference feature representation, and 2) it updates the high-confidence training samples of the target domain in a cyclic manner. The benefit is that the domain shift between the source and target domains is progressively alleviated, promoting change detection performance on the target domain in an unsupervised manner. Experimental results on different datasets demonstrate that the proposed method achieves better performance than state-of-the-art change detection methods.
4
Zhang M, Gao T, Gong M, Zhu S, Wu Y, Li H. Semisupervised Change Detection Based on Bihierarchical Feature Aggregation and Extraction Network. IEEE Trans Neural Netw Learn Syst 2024; 35:10488-10502. [PMID: 37022855] [DOI: 10.1109/tnnls.2023.3242075]
Abstract
With the rapid development of remote sensing (RS) technology, high-resolution RS image change detection (CD) has been widely used in many applications. Pixel-based CD techniques are maneuverable and widely used, but vulnerable to noise interference. Object-based CD techniques can effectively utilize the abundant spectrum, texture, shape, and spatial information of RS images, but tend to ignore details. How to combine the advantages of pixel-based and object-based methods remains a challenging problem. Besides, although supervised methods can learn from data, true labels representing the changed information of RS images are often hard to obtain. To address these issues, this article proposes a novel semisupervised CD framework for high-resolution RS images, which employs a small amount of truly labeled data and a large amount of unlabeled data to train the CD network. A bihierarchical feature aggregation and extraction network (BFAEN) is designed to concatenate pixelwise and objectwise features into a representation that comprehensively utilizes the two levels of features. To alleviate the coarseness and insufficiency of labeled samples, a confident learning algorithm is used to eliminate noisy labels, and a novel loss function is designed to train the model with true and pseudo-labels in a semisupervised fashion. Experimental results on real datasets demonstrate the effectiveness and superiority of the proposed method.
5
Wu C, Chen H, Du B, Zhang L. Unsupervised Change Detection in Multitemporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network. IEEE Trans Cybern 2022; 52:12084-12098. [PMID: 34236977] [DOI: 10.1109/tcyb.2021.3086884]
Abstract
With the development of Earth observation technology, very-high-resolution (VHR) images have become an important data source for change detection (CD). In recent years, deep learning (DL) methods have achieved conspicuous performance in the CD of VHR images. Nonetheless, most existing DL-based CD models require annotated training samples. In this article, a novel unsupervised model, called kernel principal component analysis (KPCA) convolution, is proposed for extracting representative features from multitemporal VHR images. Based on the KPCA convolution, an unsupervised deep siamese KPCA convolutional mapping network (KPCA-MNet) is designed for binary and multiclass CD. In the KPCA-MNet, high-level spatial-spectral feature maps are extracted by a deep siamese network consisting of weight-shared KPCA convolutional layers. Then, the change information in the feature difference map is mapped into a 2-D polar domain. Finally, the CD results are generated by threshold segmentation and clustering algorithms. No procedure of KPCA-MNet requires labeled data. Theoretical analysis and experimental results on two binary CD datasets and one multiclass CD dataset demonstrate the validity, robustness, and potential of the proposed method.
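As a rough sketch of what kernel PCA computes at the core of a KPCA convolution (plain RBF kernel PCA on small vectors standing in for image patches; this is generic kernel PCA, not the authors' KPCA-MNet layer, and the parameter values are illustrative):

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=0.5):
    """Project the rows of X onto the top principal components in RBF kernel space."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one     # center the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)                # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]    # pick the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                             # projected samples, shape (n, k)

# Two well-separated clusters of 2-D points (stand-ins for image patches).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
Z = rbf_kernel_pca(X, n_components=1)
```

The first kernel principal component separates the two clusters, which is the kind of representative, label-free feature the abstract describes.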
6
Biophysical Model: A Promising Method in the Study of the Mechanism of Propofol: A Narrative Review. Comput Intell Neurosci 2022; 2022:8202869. [PMID: 35619772] [PMCID: PMC9129930] [DOI: 10.1155/2022/8202869]
Abstract
The physiological and neuroregulatory mechanism of propofol is still understood only in very limited terms. It is one of the important open questions in anesthesiology and is of great value in both scientific and clinical fields. It is acknowledged that neural networks comprising a number of neural circuits might be involved in the anesthetic mechanism, but this hypothesis needs to be further elucidated. With the progress of artificial intelligence, it is becoming more feasible to address this problem by using artificial neural networks to analyze temporal waveform data and to construct biophysical computational models. This review summarizes current knowledge regarding the anesthetic mechanism of propofol, an intravenous general anesthetic, from the perspective of biophysical computational models.
7
Zhang W, Jiao L, Liu F, Yang S, Liu J. Adaptive Contourlet Fusion Clustering for SAR Image Change Detection. IEEE Trans Image Process 2022; 31:2295-2308. [PMID: 35245194] [DOI: 10.1109/tip.2022.3154922]
Abstract
In this paper, a novel unsupervised change detection method, adaptive Contourlet fusion clustering, is proposed for multi-temporal synthetic aperture radar (SAR) images; it combines adaptive Contourlet fusion with fast non-local clustering. A binary image indicating changed regions is generated by a novel fuzzy clustering algorithm from a Contourlet-fused difference image. Contourlet fusion uses complementary information from different types of difference images: details should be restrained in unchanged regions and highlighted in changed regions. Different fusion rules are designed for the low-frequency band and the high-frequency directional bands of the Contourlet coefficients. A fast non-local clustering algorithm (FNLC) is then proposed to classify the fused image into changed and unchanged regions. To reduce the impact of noise while preserving the details of changed regions, both local and non-local information is incorporated into the FNLC in a fuzzy way. Experiments on both small- and large-scale datasets demonstrate the state-of-the-art performance of the proposed method in real applications.
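The idea of band-wise fusion rules can be illustrated with a toy two-band example (generic averaging and max-absolute rules on synthetic NumPy arrays; this is not the Contourlet transform or the paper's adaptive rules):

```python
import numpy as np

def fuse_bands(low_a, low_b, high_a, high_b):
    """Toy band-wise fusion: average the low-frequency bands (smooth background),
    keep the larger-magnitude coefficient in the high-frequency bands (salient detail)."""
    fused_low = 0.5 * (low_a + low_b)                 # averaging rule for low band
    pick_a = np.abs(high_a) >= np.abs(high_b)
    fused_high = np.where(pick_a, high_a, high_b)     # max-absolute rule for high band
    return fused_low, fused_high

# Synthetic bands from two difference images.
low_a = np.full((4, 4), 2.0)
low_b = np.full((4, 4), 4.0)
high_a = np.zeros((4, 4)); high_a[1, 1] = 5.0         # a detail present only in image A
high_b = np.zeros((4, 4)); high_b[2, 2] = -7.0        # a detail present only in image B
fl, fh = fuse_bands(low_a, low_b, high_a, high_b)
```

The fused result keeps the strong detail coefficients from both inputs while smoothing the background, which is the complementary-information behavior the abstract describes.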
8
Deep Siamese Networks Based Change Detection with Remote Sensing Images. Remote Sens 2021. [DOI: 10.3390/rs13173394]
Abstract
Although considerable success has been achieved in change detection on optical remote sensing images, accurate detection of specific changes is still challenging. Due to the diversity and complexity of ground-surface changes and the increasing demand for detecting changes that require high-level semantics, we have to resort to deep learning techniques to extract the intrinsic representations of changed areas. However, one key problem in developing deep learning methods for detecting specific change areas is the limited availability of annotated data. In this paper, we collect a change detection dataset with 862 labeled image pairs, in which urban construction-related changes are labeled. Further, we propose a supervised change detection method based on a deep siamese semantic segmentation network to handle the proposed data effectively. The novelty of the method is that the proposed siamese network treats the change detection problem as a binary semantic segmentation task and learns to extract features from the image pairs directly. The siamese architecture, together with the elaborately designed semantic segmentation networks, significantly improves performance on change detection tasks. Experimental results demonstrate the promising performance of the proposed network compared to existing approaches.
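The weight-sharing idea behind a siamese branch can be sketched with a single shared filter applied to both acquisition dates (a hand-rolled NumPy cross-correlation with a fixed averaging kernel, not the paper's deep segmentation network; the images and threshold are synthetic):

```python
import numpy as np

def shared_conv(img, kernel):
    """Valid-mode 2-D cross-correlation; the same kernel is applied to both
    images, mimicking the weight sharing of siamese branches."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

kernel = np.full((3, 3), 1.0 / 9.0)                  # shared "feature extractor"
img_t1 = np.zeros((8, 8))                            # date 1: empty scene
img_t2 = np.zeros((8, 8)); img_t2[2:5, 2:5] = 1.0    # date 2: a new structure appears
feat_diff = np.abs(shared_conv(img_t1, kernel) - shared_conv(img_t2, kernel))
change_mask = feat_diff > 0.5                        # threshold into a binary change map
```

Because both images pass through the same weights, identical regions produce identical features and cancel in the difference, leaving only the genuinely changed area.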
9
An Airport Knowledge-Based Method for Accurate Change Analysis of Airport Runways in VHR Remote Sensing Images. Remote Sens 2020. [DOI: 10.3390/rs12193163]
Abstract
Due to the complexity of airport backgrounds and runway structures, the performance of most runway extraction methods is limited. Furthermore, military applications currently attach great importance to semantic changes of certain objects in an airport, yet few studies have addressed this subject. To address these issues, this paper proposes an accurate runway change analysis method comprising two stages: airport runway extraction and runway change analysis. In the former stage, airport knowledge, such as chevron markings and runway edge markings, is first applied in combination with multiple runway features to improve accuracy. In addition, the proposed method accomplishes airport runway extraction automatically. In the latter stage, semantic information and vector results of runway changes are obtained simultaneously by comparing bi-temporal runway extraction results. On six test images with about 0.5-m spatial resolution, the average completeness of runway extraction is nearly 100%, and the average quality is nearly 89%. In addition, a final experiment using two sets of bi-temporal very high-resolution (VHR) images of runway changes demonstrated that the semantic results obtained by our method are consistent with the real situation, with a final accuracy of over 80%. Overall, airport knowledge, especially chevron markings and runway edge markings, is critical to runway recognition/detection, and multiple runway features, such as shape and parallel-line features, can further improve the completeness and accuracy of runway extraction. Finally, a small step has been taken in the study of runway semantic changes, which cannot be accomplished by change detection alone.
10
A Feature Space Constraint-Based Method for Change Detection in Heterogeneous Images. Remote Sens 2020. [DOI: 10.3390/rs12183057]
Abstract
With the development of remote sensing technologies, change detection in heterogeneous images has become much more necessary and significant. The main difficulty lies in making the input heterogeneous images comparable so that changes can be detected. In this paper, we propose an end-to-end heterogeneous change detection method based on a feature space constraint. First, considering that the input heterogeneous images lie in two distinct feature spaces, two encoders with the same structure are used to extract their features. A decoder is used to obtain the change map from the extracted features. Then, Gram matrices, which encode the correlations between features, are calculated to represent the two feature spaces. The squared Euclidean distance between the Gram matrices, termed the feature space loss, is used to constrain the extracted features. A combined loss function consisting of the binary cross-entropy loss and the feature space loss is then designed for training the model. Finally, change detection results between the heterogeneous images are obtained once the model is well trained. The proposed method constrains the features of the two heterogeneous images to the same feature space while keeping their unique characteristics, so that the comparability between features is enhanced and better detection results can be achieved. Experiments on two heterogeneous image datasets consisting of optical and SAR images demonstrate the effectiveness and superiority of the proposed method.
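The Gram-matrix feature space loss described above can be sketched directly in NumPy (the feature maps below are random stand-ins for encoder outputs; the normalization by the number of spatial positions is an assumption, as the abstract does not specify one):

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a (channels, height*width) feature map: channel-to-channel
    correlations that characterize the feature space independent of position."""
    c, n = feats.shape
    return feats @ feats.T / n

def feature_space_loss(feats_a, feats_b):
    """Squared Euclidean (Frobenius) distance between the two Gram matrices."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return float(np.sum((ga - gb) ** 2))

rng = np.random.default_rng(0)
f_opt = rng.normal(size=(4, 16))      # stand-in features from the optical encoder
f_sar = rng.normal(size=(4, 16))      # stand-in features from the SAR encoder
loss_same = feature_space_loss(f_opt, f_opt)   # identical feature spaces -> zero loss
loss_diff = feature_space_loss(f_opt, f_sar)   # mismatched spaces -> positive loss
```

Minimizing such a term during training pulls the two encoders' feature statistics together, which is the comparability constraint the abstract describes.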