Wang H, Li W. Fast ramp fraction loss SVM classifier with low computational complexity for pattern classification. Neural Netw 2025;184:107087. [PMID: 39742534] [DOI: 10.1016/j.neunet.2024.107087]
[Received: 06/22/2024] [Revised: 12/04/2024] [Accepted: 12/20/2024] [Indexed: 01/03/2025]
Abstract
The support vector machine (SVM) is a powerful tool for pattern classification thanks to its outstanding efficiency. However, on large-scale classification tasks its considerable computational complexity can present a substantial barrier. To reduce this complexity, we introduce a novel ramp fraction loss SVM model, called Lrf-SVM, designed to achieve sparsity and robustness simultaneously. Using our proposed proximal stationary point, we develop a novel optimality theory for the nonsmooth and nonconvex Lrf-SVM. Building on this theory, we introduce an efficient alternating direction method of multipliers (ADMM) that incorporates a working set and has low computational complexity for solving Lrf-SVM. Moreover, we prove that the algorithm achieves global convergence. Numerical experiments show that our algorithm is highly efficient, surpassing nine state-of-the-art solvers in number of support vectors, computation speed, classification accuracy, and robustness to outliers. For example, on a real dataset with over 10⁷ samples, our algorithm finishes the classification in just 18.67 s, compared with at least 605.3 s for the other algorithms.
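For intuition on why ramp-type losses yield robustness to outliers, a minimal sketch of the classic ramp (clipped hinge) loss is shown below. Note this is an illustrative relative of the loss studied in the paper, not the exact ramp fraction loss Lrf it proposes; the function name and the cap parameter `s` are assumptions for illustration.

```python
import numpy as np

def ramp_loss(margins, s=1.0):
    # Classic ramp (clipped hinge) loss: the hinge loss max(0, 1 - y*f(x))
    # is capped at 1 + s, so a grossly misclassified outlier contributes
    # only a bounded penalty instead of a penalty growing with its margin.
    # Illustrative only: NOT the paper's ramp fraction loss Lrf.
    hinge = np.maximum(0.0, 1.0 - np.asarray(margins, dtype=float))
    return np.minimum(hinge, 1.0 + s)

# Correctly classified point (margin >= 1): zero loss.
# Extreme outlier (margin = -100): loss capped at 1 + s = 2.0,
# whereas the plain hinge loss would be 101.
```

Because the loss is bounded, points far on the wrong side of the margin stop pulling the decision boundary toward them, which is the robustness property the abstract refers to; the flat region also encourages sparsity in the set of support vectors.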