1
Huang SC, Hoang QV, Jaw DW. Self-Adaptive Feature Transformation Networks for Object Detection in Low Luminance Images. ACM Trans Intell Syst Technol 2022. [DOI: 10.1145/3480973]
Abstract
Despite recent improvements in object detection techniques, many of them fail to detect objects in low-luminance images. The blurry and dimmed nature of low-luminance images leads to the extraction of vague features and to detection failures. In addition, many existing object detection methods rely on models trained on both sufficient- and low-luminance images, which also negatively affects the feature extraction process and the detection results. In this article, we propose a framework called Self-adaptive Feature Transformation Network (SFT-Net) to effectively detect objects under low-luminance conditions. The proposed SFT-Net consists of the following three modules: (1) a feature transformation module, (2) a self-adaptive module, and (3) an object detection module. The purpose of the feature transformation module is to enhance the extracted features by learning a feature domain projection procedure in an unsupervised manner. The self-adaptive module serves as a probabilistic component that produces appropriate features from either the transformed or the original features, further boosting the performance and generalization ability of the proposed framework. Finally, the object detection module is designed to accurately detect objects in both low- and sufficient-luminance images using the appropriate features produced by the self-adaptive module. The experimental results demonstrate that the proposed SFT-Net framework significantly outperforms state-of-the-art object detection techniques, achieving an average precision (AP) up to 6.35 and 11.89 points higher on the sufficient- and low-luminance domains, respectively.
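The self-adaptive module described above is, in essence, a learned probabilistic switch between the original and the transformed feature maps. The PyTorch snippet below is a minimal sketch of that idea only; the class name SelfAdaptiveGate, the pooling-based scorer, and the soft blend are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class SelfAdaptiveGate(nn.Module):
    """Hypothetical sketch of a self-adaptive selection module: a learned
    gate that blends the original feature map with its luminance-transformed
    counterpart. Layer choices here are assumptions, not SFT-Net's."""

    def __init__(self, channels: int):
        super().__init__()
        # Score both candidates jointly and predict a per-image probability
        # of preferring the transformed features.
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels * 2, 1),
            nn.Sigmoid(),
        )

    def forward(self, original: torch.Tensor, transformed: torch.Tensor) -> torch.Tensor:
        # Soft selection keeps the module differentiable end to end;
        # a hard argmax could replace the blend at inference time.
        p = self.score(torch.cat([original, transformed], dim=1)).view(-1, 1, 1, 1)
        return p * transformed + (1.0 - p) * original
```

For example, gate = SelfAdaptiveGate(256) would blend two 256-channel feature maps, with the detection head consuming the gated output.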
Affiliation(s)
- Quoc-Viet Hoang
- National Taipei University of Technology and Hung Yen University of Technology and Education, Vietnam
- Da-Wei Jaw
- National Taiwan University, Taipei, Taiwan
2
Liu H, Wang H, Wu Y, Xing L. Superpixel Region Merging Based on Deep Network for Medical Image Segmentation. ACM Trans Intell Syst Technol 2020. [DOI: 10.1145/3386090]
Abstract
Automatic and accurate semantic segmentation of pathological structures in medical images is challenging because of noise, deformable pathological shapes, and low contrast between soft tissues. Classical superpixel-based classification algorithms suffer from edge leakage due to the complexity and heterogeneity inherent in medical images. We therefore propose a deep U-Net with superpixel region merging incorporated for edge enhancement, to facilitate and optimize segmentation. Our approach combines three innovations: (1) unlike conventional deep learning-based image segmentation, the segmentation evolves from superpixel region merging driven by U-Net training, so the merging exploits rich semantic information in addition to gray-level similarity; (2) a bilateral filtering module is adopted at the beginning of the network to eliminate external noise and enhance soft-tissue contrast at pathology edges; and (3) a normalization layer is inserted after the convolutional layer at each feature scale to prevent overfitting and increase sensitivity to model parameters. The model was validated on lung CT, brain MR, and coronary CT datasets. Experiments with different superpixel methods and cross-validation confirm the effectiveness of this architecture. The hyperparameter settings were explored empirically to achieve a good trade-off between performance and efficiency, with a four-layer network achieving the best results in precision, recall, F-measure, and running speed. Our method was demonstrated to outperform state-of-the-art networks, including FCN-16s, SegNet, PSPNet, DeepLabv3, and the traditional U-Net, both quantitatively and qualitatively. Source code for the complete method is available at https://github.com/Leahnawho/Superpixel-network.
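The pipeline outlined in the abstract (bilateral pre-filtering, superpixel generation, semantics-guided region merging) can be illustrated with off-the-shelf scikit-image tools. The sketch below is one plausible reading, assuming a grayscale image normalized to [0, 1] and a per-pixel foreground probability map produced by the trained network; the merge_superpixels helper, the SLIC superpixels, and the mean-probability merge rule are illustrative assumptions, not the paper's learned criterion.

```python
import numpy as np
from skimage.restoration import denoise_bilateral
from skimage.segmentation import slic

def merge_superpixels(image: np.ndarray, prob_map: np.ndarray,
                      n_segments: int = 400, threshold: float = 0.5) -> np.ndarray:
    """Merge superpixels into a binary pathology mask, guided by a network's
    per-pixel foreground probabilities (prob_map, same shape as image)."""
    # Bilateral filtering suppresses noise while preserving tissue edges,
    # mirroring the bilateral module at the start of the network.
    smoothed = denoise_bilateral(image, sigma_color=0.05, sigma_spatial=3)
    # Over-segment into superpixels (channel_axis=None for grayscale input).
    labels = slic(smoothed, n_segments=n_segments, channel_axis=None)
    # Keep a superpixel whenever the network's mean probability over its
    # pixels clears the threshold; adjacent kept regions merge implicitly.
    mask = np.zeros_like(labels, dtype=np.uint8)
    for region in np.unique(labels):
        member = labels == region
        if prob_map[member].mean() > threshold:
            mask[member] = 1
    return mask
```

Deciding per superpixel rather than per pixel is what suppresses edge leakage: superpixel boundaries snap to image edges, so a region-level threshold cannot cut through a tissue boundary the way a per-pixel threshold can.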
Affiliation(s)
- Hui Liu
- Shandong University of Finance and Economics and Stanford University, Jinan, Shandong Province, China
- Haiou Wang
- Shandong University of Finance and Economics, Jinan, Shandong Province, China
- Yan Wu
- Stanford University, CA, USA