Research article

Accelerated least squares twin support vector machine with $L_1$-norm regularization by ADMM

  • Received: 26 February 2025; Revised: 23 April 2025; Accepted: 29 April 2025; Published: 06 May 2025
  • MSC : 65K05, 90C20

  • This paper introduces a modified least squares twin support vector machine (LSTSVM) designed to enhance classification accuracy and robustness in the presence of outliers and noise. Building on the traditional twin SVM (TWSVM) and LSTSVM frameworks, we propose replacing the $L_2$-norm of the error variables with the $L_1$-norm to mitigate the influence of extreme values and improve the sparsity of the solution. To address the computational challenges of large-scale datasets, we employ the alternating direction method of multipliers (ADMM), which decomposes the optimization problem into smaller subproblems, ensuring scalability and reduced computational cost. Acceleration steps with a guard condition are also integrated to speed up convergence. Experimental evaluations demonstrate the proposed method's superior computational efficiency and classification accuracy compared with TWSVM and traditional LSTSVM, making it a promising solution for real-world classification tasks involving noisy or imbalanced data. (A schematic of the modified objective and an accelerated-ADMM sketch are given after the citation below.)

    Citation: Rujira Fongmun, Thanasak Mouktonglang. Accelerated least squares twin support vector machine with $L_1$-norm regularization by ADMM. AIMS Mathematics, 2025, 10(5): 10413-10430. doi: 10.3934/math.2025474
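
    The abstract describes two ingredients: an LSTSVM variant whose error term uses the $L_1$-norm instead of the squared $L_2$-norm, and an accelerated ADMM solver protected by a guard condition. To make the first ingredient concrete, the block below writes out one plausible form of the first LSTSVM subproblem with this substitution, using standard LSTSVM notation ($A$ and $B$ collect the samples of the two classes, $e_1$ and $e_2$ are vectors of ones, and $c_1 > 0$ is a trade-off parameter). This is a hedged reconstruction from the abstract, not necessarily the paper's exact formulation, which may include additional regularization terms.

    ```latex
    % First hyperplane subproblem of LSTSVM with the squared error term
    % (c_1/2)*||xi||_2^2 replaced by an L1 term c_1*||xi||_1
    % (a schematic reconstruction, not necessarily the authors' exact model):
    \[
    \begin{aligned}
      \min_{w_1,\, b_1,\, \xi} \quad & \tfrac{1}{2}\,\lVert A w_1 + e_1 b_1 \rVert_2^2 \;+\; c_1 \lVert \xi \rVert_1 \\
      \text{s.t.} \quad & -(B w_1 + e_2 b_1) + \xi = e_2 ,
    \end{aligned}
    \]
    ```

    with the second subproblem obtained by swapping the roles of the two classes. Because the $L_1$ term is nonsmooth, the closed-form solution of classical LSTSVM no longer applies; this is where ADMM enters, since splitting the smooth quadratic part from the $L_1$ part yields subproblems with cheap solutions (a linear solve and a soft-thresholding step). The sketch below shows this pattern on a generic $L_1$-regularized least squares problem, with Nesterov-type acceleration and a residual-based guard that restarts the momentum whenever the combined residual fails to decrease, in the spirit of the fast ADMM with restart of Goldstein, O'Donoghue, Setzer, and Baraniuk (2014). It is a minimal illustration of the accelerate-and-guard template, not the paper's implementation; all names and parameter values are illustrative.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def accelerated_admm_l1_ls(A, b, lam, rho=1.0, eta=0.999,
                               max_iter=500, tol=1e-8):
        """Fast ADMM with a restart guard for
               min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
        split as f(x) = 0.5 * ||A x - b||^2, g(z) = lam * ||z||_1, x = z.
        A minimal sketch of the accelerate-and-guard template, not the
        paper's solver."""
        n = A.shape[1]
        # Cache a Cholesky factorization of the x-update system (A^T A + rho I).
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
        Atb = A.T @ b

        z = z_hat = np.zeros(n)
        u = u_hat = np.zeros(n)          # scaled dual variable
        alpha, c_prev = 1.0, np.inf

        for _ in range(max_iter):
            # x-update: smooth quadratic subproblem via the cached factorization.
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z_hat - u_hat)))
            # z-update: prox of the L1 term (soft-thresholding).
            z_new = soft_threshold(x + u_hat, lam / rho)
            # Scaled dual ascent step.
            u_new = u_hat + x - z_new

            # Combined residual monitored by the guard condition.
            c = rho * (np.linalg.norm(u_new - u_hat) ** 2
                       + np.linalg.norm(z_new - z_hat) ** 2)

            if c < eta * c_prev:
                # Guard passed: Nesterov-style extrapolation of z and u.
                alpha_new = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
                w = (alpha - 1.0) / alpha_new
                z_hat = z_new + w * (z_new - z)
                u_hat = u_new + w * (u_new - u)
                alpha, c_prev = alpha_new, c
                z, u = z_new, u_new
                if np.sqrt(c) < tol:
                    break
            else:
                # Guard failed: restart the momentum from the last accepted
                # iterate so that the convergence of plain ADMM is preserved.
                alpha, z_hat, u_hat = 1.0, z, u
                c_prev = c_prev / eta
        return z
    ```

    For example, `accelerated_admm_l1_ls(np.random.randn(100, 50), np.random.randn(100), lam=0.1)` returns a sparse estimate. The guard is what keeps the acceleration safe: plain ADMM's combined residual decreases monotonically for convex problems, whereas blindly extrapolated iterates can oscillate, so the restart falls back to the last non-accelerated point whenever the residual test fails.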


  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)