Jiang S, Chen Q, Xiang Y, Pan Y, Wu X, Lin Y. Confounder balancing in adversarial domain adaptation for pre-trained large models fine-tuning. Neural Netw 2024; 173:106173. PMID: 38387200. DOI: 10.1016/j.neunet.2024.106173. [Received: 10/26/2023] [Revised: 01/07/2024] [Accepted: 02/08/2024]
Abstract
Pre-trained large models (PLMs) offer excellent generalization, in-context learning, and emergent abilities that let them handle specific tasks without direct training data, making them strong foundation models for adversarial domain adaptation (ADA) methods, which transfer knowledge learned from a source domain to target domains. However, existing ADA methods fail to account properly for confounders, which are the root cause of the distribution shift between the source and target domains. This study proposes a confounder-balancing method for adversarial domain adaptation in PLM fine-tuning (CadaFT), which comprises a PLM serving as the foundation model for a feature extractor, a domain classifier, and a confounder classifier, all jointly trained with an adversarial loss. This loss is designed to improve domain-invariant representation learning by diluting the discriminability of the domain classifier, while simultaneously balancing the confounder distribution between source and unmeasured domains during training. Compared with the latest ADA methods, CadaFT can correctly identify confounders in domain-invariant features, thereby eliminating confounder biases in the features extracted from PLMs. The confounder classifier in CadaFT is designed as a plug-and-play module and can be applied in environments where confounders are measurable, unmeasurable, or partially measurable. Empirical results on natural language processing and computer vision downstream tasks show that CadaFT outperforms the latest GPT-4, LLaMA2, ViT, and ADA methods.
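The abstract describes three heads (a task head, a domain classifier, and a confounder classifier) trained jointly with an adversarial loss that penalizes domain and confounder discriminability. The following is a minimal NumPy sketch of that joint objective only, not the authors' implementation: the linear heads, random data, and the trade-off coefficient `lam` are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

rng = np.random.default_rng(0)

# Hypothetical stand-ins for features produced by the PLM feature extractor
features = rng.normal(size=(8, 16))            # batch of 8, 16-dim features
task_labels = rng.integers(0, 3, size=8)       # downstream task labels
domain_labels = rng.integers(0, 2, size=8)     # source = 0, target = 1
confounder_labels = rng.integers(0, 2, size=8) # measured confounder classes

# Linear classifier heads (in practice these weights are learned)
W_task = rng.normal(size=(16, 3))
W_dom = rng.normal(size=(16, 2))
W_conf = rng.normal(size=(16, 2))

task_loss = cross_entropy(softmax(features @ W_task), task_labels)
dom_loss = cross_entropy(softmax(features @ W_dom), domain_labels)
conf_loss = cross_entropy(softmax(features @ W_conf), confounder_labels)

lam = 0.1  # assumed adversarial trade-off coefficient
# Minimizing this w.r.t. the feature extractor *maximizes* the domain and
# confounder losses, diluting their discriminability in the learned features
total_loss = task_loss - lam * (dom_loss + conf_loss)
print(total_loss)
```

In a real training loop the adversarial sign flip is usually implemented with a gradient-reversal layer between the feature extractor and the two classifiers, so all three heads can be optimized with ordinary gradient descent.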
Affiliation(s)
- Shuoran Jiang
- Harbin Institute of Technology, Shenzhen, Harbin Institute of Technology campus, Taoyuan Street, Nanshan District, Shenzhen, 518055, Guangdong, China
- Qingcai Chen
- Harbin Institute of Technology, Shenzhen, Harbin Institute of Technology campus, Taoyuan Street, Nanshan District, Shenzhen, 518055, Guangdong, China; Peng Cheng Laboratory, No. 2, Xingke 1st Street, Nanshan District, Shenzhen, 518055, Guangdong, China
- Yang Xiang
- Peng Cheng Laboratory, No. 2, Xingke 1st Street, Nanshan District, Shenzhen, 518055, Guangdong, China
- Youcheng Pan
- Peng Cheng Laboratory, No. 2, Xingke 1st Street, Nanshan District, Shenzhen, 518055, Guangdong, China
- Xiangping Wu
- Harbin Institute of Technology, Shenzhen, Harbin Institute of Technology campus, Taoyuan Street, Nanshan District, Shenzhen, 518055, Guangdong, China
- Yukang Lin
- Harbin Institute of Technology, Shenzhen, Harbin Institute of Technology campus, Taoyuan Street, Nanshan District, Shenzhen, 518055, Guangdong, China