Research article

2-tuple linguistic Fermatean fuzzy MAGDM based on the WASPAS method for selection of solid waste disposal location

  • Received: 19 August 2022 Revised: 22 September 2022 Accepted: 18 October 2022 Published: 12 December 2022
  • Manufacturing plants generate toxic waste that can be harmful to workers, the population and the atmosphere. Solid waste disposal location selection (SWDLS) for manufacturing plants is one of the fastest growing challenges in many countries. The weighted aggregated sum product assessment (WASPAS) is a unique combination of the weighted sum model and the weighted product model. The purpose of this research paper is to introduce a WASPAS method with a 2-tuple linguistic Fermatean fuzzy (2TLFF) set for the SWDLS problem by using the Hamacher aggregation operators. As it is based on simple and sound mathematics, being quite comprehensive in nature, it can be successfully applied to any decision-making problem. First, we briefly introduce the definition, operational laws and some aggregation operators of 2-tuple linguistic Fermatean fuzzy numbers. Thereafter, we extend the WASPAS model to the 2TLFF environment to build the 2TLFF-WASPAS model. Then, the calculation steps for the proposed WASPAS model are presented in a simplified form. Our proposed method is more reasonable and scientific in terms of considering the subjectivity of the decision maker's behaviors and the dominance of each alternative over others. Finally, a numerical example for SWDLS is proposed to illustrate the new method, and some comparisons are also conducted to further illustrate the advantages of the new method. The analysis shows that the results of the proposed method are stable and consistent with the results of some existing methods.

    Citation: Muhammad Akram, Usman Ali, Gustavo Santos-García, Zohra Niaz. 2-tuple linguistic Fermatean fuzzy MAGDM based on the WASPAS method for selection of solid waste disposal location[J]. Mathematical Biosciences and Engineering, 2023, 20(2): 3811-3837. doi: 10.3934/mbe.2023179




    Support vector machines (SVMs) are robust computational tools for supervised learning, commonly employed in classification and regression tasks. With foundations in statistical learning theory and Bayesian principles, SVMs aim to identify an optimal separating hyperplane that maximizes the margin between positive and negative examples [1]. This approach has proven effective in diverse applications such as particle identification, face recognition, text categorization, and bioinformatics.

    Expanding on the conventional SVM framework, Mangasarian and Wild [2] introduced the generalized eigenvalue proximal support vector machine (GEPSVM), which addresses binary classification by identifying two distinct hyperplanes, one for each category. Each data point is assigned to the class whose hyperplane lies closest, and the training task is transformed into a generalized eigenvalue problem whose solutions are the eigenvectors associated with the smallest eigenvalues.

    To further enhance classification efficiency, Jayadeva et al. [3] proposed the twin support vector machine (TWSVM), which also constructs two non-parallel hyperplanes. Unlike GEPSVM, TWSVM involves solving two smaller quadratic programming problems (QPPs) instead of a single large-scale QPP. This design reduces computational complexity, as shown in experimental evaluations that demonstrate TWSVM's superior performance over GEPSVM and standard SVMs on datasets from the University of California, Irvine (UCI) machine learning repository [4].

    Focusing on computational simplicity and scalability, Kumar and Gopal [5] developed the least squares twin support vector machine (LSTSVM) as an extension of TWSVM [6]. LSTSVM reformulates the primal QPPs of TWSVM using least squares principles, replacing inequality constraints with equality constraints. Consequently, the optimization reduces to solving two systems of linear equations, bypassing the need for external optimizers. This approach effectively accommodates nonlinear kernels while maintaining computational efficiency. Empirical comparisons across various UCI and artificial datasets confirm LSTSVM's faster training time and competitive classification accuracy relative to TWSVM and traditional SVMs.

    Despite these advantages, traditional LSTSVM minimizes the L2-norm of error variables, making it sensitive to outliers. This sensitivity can degrade classification accuracy, especially in noisy or imbalanced datasets. To address this, several enhancements have been proposed to improve the robustness of LSTSVM and related models. Furthermore, the standard LSTSVM reduces the coefficients of irrelevant features without eliminating any of them entirely. As a result, if the dataset contains many irrelevant features, using the standard LSTSVM may lead to a complex model with numerous included features since none of the irrelevant coefficients are reduced to zero.

    Gao et al. [7] introduced the L1-norm least squares twin support vector machine, which replaces the L2-norm with the L1-norm in the objective function to promote robustness and handle outliers effectively. This reformulation also promotes sparsity and feature suppression. By converting the constrained problem into an unconstrained convex quadratic form, they solve it efficiently using a generalized Newton method.

    Yan et al. [8] proposed the L1-norm-based least squares twin bounded support vector machine, replacing all conventional L2-norms with L1-norms to reduce outlier influence. The optimization problems are addressed through an iterative reweighting technique.

    Wang et al. [9] introduced the robust capped L1-norm twin support vector machine with privileged information, which incorporates the learning using privileged information framework. The capped L1-norm enhances robustness, while upper and lower bound constraints on both main and privileged features control noise sensitivity. An alternating minimization approach is used to solve the optimization problems.

    In a related line of work, the robust capped L1-norm projection twin support vector machine (CPTSVM) was proposed to improve the outlier resistance of PTSVM models by Yang et al. [10]. By replacing the squared L2-norm with the capped L1-norm, the CPTSVM formulation increases classifier robustness in the presence of noise and outliers. Though the resulting problems are non-convex and non-smooth, an iterative algorithm with proven convergence is employed to solve them. Experiments on artificial and benchmark datasets demonstrate the model's robustness and effectiveness.

    These related works inform the design of our proposed method, which integrates L1-norm regularization into the LSTSVM framework and leverages the alternating direction method of multipliers (ADMM) [11,12,13] for scalable optimization. In contrast to prior methods, our approach introduces acceleration mechanisms and guard conditions to ensure both robustness and fast convergence on large and noisy datasets.

    This section briefly reviews the fundamental concepts of TWSVM, LSTSVM, ADMM, and the Lasso technique.

    TWSVM is a classification technique developed to reduce the computational burden of traditional SVM. Instead of finding a single hyperplane to separate two classes, TWSVM constructs two non-parallel hyperplanes. Each hyperplane is positioned closer to one class while maintaining the maximum possible distance from the other. Given a dataset $D$ with $m_1$ and $m_2$ training points labeled $+1$ and $-1$, respectively, in $\mathbb{R}^n$, the data points for class $+1$ are represented by the matrix $A \in \mathbb{R}^{m_1 \times n}$, while the matrix $B \in \mathbb{R}^{m_2 \times n}$ represents class $-1$. The two hyperplanes of linear TWSVM are

    $$x^T w_1 + b_1 = 0 \quad \text{and} \quad x^T w_2 + b_2 = 0,$$

    where $w_1, w_2 \in \mathbb{R}^n$ are normal vectors and $b_1, b_2 \in \mathbb{R}$ are bias terms. These hyperplanes are obtained by solving two separate optimization problems, each associated with one class:

    $$\min_{w_1, b_1, \xi_2}\ \frac{1}{2}\|Aw_1 + e_1 b_1\|_2^2 + c_1 e_2^T \xi_2 \quad \text{s.t.}\ -(Bw_1 + e_2 b_1) + \xi_2 \ge e_2,\ \ \xi_2 \ge 0, \tag{2.1}$$

    and

    $$\min_{w_2, b_2, \xi_1}\ \frac{1}{2}\|Bw_2 + e_2 b_2\|_2^2 + c_2 e_1^T \xi_1 \quad \text{s.t.}\ (Aw_2 + e_1 b_2) + \xi_1 \ge e_1,\ \ \xi_1 \ge 0, \tag{2.2}$$

    where $c_1, c_2 > 0$ are parameters, $\xi_1 \in \mathbb{R}^{m_1}$ and $\xi_2 \in \mathbb{R}^{m_2}$ are slack vectors, and $e_1$ and $e_2$ are vectors of ones of appropriate dimensions.

    Using the Lagrangian dual method on (2.1) and (2.2), and introducing the Lagrange multipliers $\alpha \in \mathbb{R}^{m_2}$ and $\beta \in \mathbb{R}^{m_1}$, the resulting Wolfe dual formulations are:

    $$\max_{\alpha}\ e_2^T\alpha - \frac{1}{2}\alpha^T G(H^TH)^{-1}G^T\alpha \quad \text{s.t.}\ 0 \le \alpha \le c_1, \tag{2.3}$$

    where $G = [B\ \ e_2]$ and $H = [A\ \ e_1]$, and

    $$\max_{\beta}\ e_1^T\beta - \frac{1}{2}\beta^T P(Q^TQ)^{-1}P^T\beta \quad \text{s.t.}\ 0 \le \beta \le c_2, \tag{2.4}$$

    where $P = [A\ \ e_1]$ and $Q = [B\ \ e_2]$.

    Solving (2.3) and (2.4) yields the two non-parallel hyperplanes:

    $$u = -(H^TH)^{-1}G^T\alpha, \ \ \text{where}\ u = [w_1\ \ b_1]^T; \qquad v = (Q^TQ)^{-1}P^T\beta, \ \ \text{where}\ v = [w_2\ \ b_2]^T. \tag{2.5}$$

    In comparison with SVM, the QPPs in TWSVM are smaller: (2.3) and (2.4) involve only $m_2$ and $m_1$ dual variables, respectively, unlike SVM's QPP, which depends on $m_1 + m_2$ variables.

    In cases involving nonlinear data, the kernel functions are introduced, allowing the two hyperplanes of TWSVM in the kernel space to be represented as follows:

    $$K(x^T, C^T)u_1 + b_1 = 0 \quad \text{and} \quad K(x^T, C^T)u_2 + b_2 = 0, \tag{2.6}$$

    where $K$ is a properly selected kernel, $C = [A^T\ \ B^T]^T$ stacks all training points, and $u_1, u_2 \in \mathbb{R}^{m_1 + m_2}$. The nonlinear TWSVM optimization problems can be expressed as:

    $$\min_{u_1, b_1, \xi_2}\ \frac{1}{2}\|K(A, C^T)u_1 + e_1 b_1\|_2^2 + c_1 e_2^T\xi_2 \quad \text{s.t.}\ -(K(B, C^T)u_1 + e_2 b_1) + \xi_2 \ge e_2,\ \ \xi_2 \ge 0, \tag{2.7}$$

    and

    $$\min_{u_2, b_2, \xi_1}\ \frac{1}{2}\|K(B, C^T)u_2 + e_2 b_2\|_2^2 + c_2 e_1^T\xi_1 \quad \text{s.t.}\ (K(A, C^T)u_2 + e_1 b_2) + \xi_1 \ge e_1,\ \ \xi_1 \ge 0. \tag{2.8}$$

    To compute the hyperplanes for nonlinear TWSVM, the dual forms of (2.7) and (2.8) are derived and solved to obtain the hyperplanes in (2.6). However, solving nonlinear TWSVM involves handling two QPPs and requires inverting two matrices of sizes $(m_1 \times m_1)$ and $(m_2 \times m_2)$, which becomes computationally demanding for large datasets. This adds computational complexity compared to the linear case.

    LSTSVM, proposed by Kumar and Gopal [5], is a binary classification technique that also creates two non-parallel hyperplanes. It addresses two modified primal QPPs from TWSVM, applying a least squares approach in which the inequality constraints of (2.1) and (2.2) are replaced by equalities:

    $$\min_{w_1, b_1, \xi_2}\ \frac{1}{2}\|Aw_1 + e_1 b_1\|_2^2 + \frac{c_1}{2}\xi_2^T\xi_2 \quad \text{s.t.}\ -(Bw_1 + e_2 b_1) + \xi_2 = e_2, \tag{2.9}$$

    and

    $$\min_{w_2, b_2, \xi_1}\ \frac{1}{2}\|Bw_2 + e_2 b_2\|_2^2 + \frac{c_2}{2}\xi_1^T\xi_1 \quad \text{s.t.}\ (Aw_2 + e_1 b_2) + \xi_1 = e_1, \tag{2.10}$$

    where $c_1, c_2$ are the regularization parameters and $\xi_1, \xi_2$ are slack variables.

    In (2.9), the QPP incorporates the L2-norm of the slack variable $\xi_2$ with weight $\frac{c_1}{2}$, rather than the L1-norm weighted by $c_1$ as used in (2.1). This change renders the constraint $\xi_2 \ge 0$ unnecessary. As a result, solving (2.9) reduces to solving a system of linear equations. By substituting the equality constraint directly into the objective function, the problem is rewritten as:

    $$\min_{w_1, b_1}\ \frac{1}{2}\|Aw_1 + e_1 b_1\|_2^2 + \frac{c_1}{2}\|Bw_1 + e_2 b_1 + e_2\|_2^2. \tag{2.11}$$

    Setting the gradient of (2.11) with respect to $w_1$ and $b_1$ to zero yields a closed-form solution for QPP (2.9):

    $$\begin{bmatrix} w_1 \\ b_1 \end{bmatrix} = -\left(F^TF + \frac{1}{c_1}E^TE\right)^{-1}F^Te_2, \tag{2.12}$$

    where $E = [A\ \ e_1]$ and $F = [B\ \ e_2]$.

    Similarly, solving for the second hyperplane (2.10) gives:

    $$\begin{bmatrix} w_2 \\ b_2 \end{bmatrix} = \left(E^TE + \frac{1}{c_2}F^TF\right)^{-1}E^Te_1. \tag{2.13}$$

    Therefore, the two nonparallel hyperplanes of LSTSVM can be obtained by inverting two matrices of dimension (n+1)×(n+1), where n is the number of features, which is significantly smaller than the total number of training samples. This makes LSTSVM more computationally efficient than TWSVM while also improving generalization. The nonlinear case follows the same approach, replacing linear terms with kernel functions.
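    To make the closed-form solutions (2.12) and (2.13) concrete, here is a minimal numpy sketch (our own illustration, not the authors' implementation); the matrices E and F and the parameters c1, c2 follow the definitions above, and the small eps ridge added before solving is an extra numerical safeguard of ours:

```python
import numpy as np

def lstsvm_planes(A, B, c1, c2, eps=1e-8):
    """Closed-form LSTSVM planes via Eqs. (2.12)-(2.13).

    A: (m1, n) points of class +1, B: (m2, n) points of class -1.
    Returns (w1, b1), (w2, b2). eps is a small ridge for stability.
    """
    m1, m2 = A.shape[0], B.shape[0]
    E = np.hstack([A, np.ones((m1, 1))])   # E = [A  e1]
    F = np.hstack([B, np.ones((m2, 1))])   # F = [B  e2]
    I = np.eye(E.shape[1])

    # Eq. (2.12): [w1; b1] = -(F'F + (1/c1) E'E)^{-1} F' e2
    z1 = -np.linalg.solve(F.T @ F + (1.0 / c1) * (E.T @ E) + eps * I,
                          F.T @ np.ones(m2))
    # Eq. (2.13): [w2; b2] =  (E'E + (1/c2) F'F)^{-1} E' e1
    z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * (F.T @ F) + eps * I,
                         E.T @ np.ones(m1))
    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

def predict(X, w1, b1, w2, b2):
    # Assign each point to the class whose plane is nearer (perpendicular distance).
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)
```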

    ADMM was first introduced by Glowinski and Marroco [11] and Gabay and Mercier [12]. ADMM is an iterative optimization algorithm designed to decompose complex problems into manageable subproblems, which are then solved alternately. This approach ensures computational efficiency and makes ADMM particularly suitable for distributed and large-scale problems. ADMM solves optimization problems of the form:

    $$\min_{x, z}\ f(x) + g(z) \quad \text{s.t.}\ Ax + Bz = c, \tag{2.14}$$

    where f and g are convex functions, and A, B, and c are given matrices/vectors. Since (2.14) is a constrained minimization problem, we can write the related augmented Lagrangian:

    $$L_\rho(x, z, y) = f(x) + g(z) + y^T(Ax + Bz - c) + \frac{\rho}{2}\|Ax + Bz - c\|_2^2.$$

    ADMM operates through a sequence of iterations

    $$x^{(k+1)} := \arg\min_x\ L_\rho(x, z^{(k)}, y^{(k)}), \tag{2.15}$$
    $$z^{(k+1)} := \arg\min_z\ L_\rho(x^{(k+1)}, z, y^{(k)}), \tag{2.16}$$
    $$y^{(k+1)} := y^{(k)} + \rho\left(Ax^{(k+1)} + Bz^{(k+1)} - c\right), \tag{2.17}$$

    where $\rho > 0$ is the penalty parameter and $y$ is the Lagrangian dual variable. The algorithm involves three key steps. First, an $x$-minimization step optimizes $x$ using the augmented Lagrangian $L_\rho$ while keeping $z$ and the dual variable $y$ fixed. Next, a $z$-minimization step updates $z$ similarly. Finally, the dual variable $y$ is updated using a step size proportional to the penalty parameter $\rho$.

    The Lasso. Lasso regularization, which utilizes the L1-norm, is an optimization and machine learning technique designed to reduce overfitting and promote sparsity in model parameters. It is often solved using ADMM, since ADMM is well suited to convex optimization problems with separable objective functions and constraints. The corresponding Lasso formulation is expressed as:

    $$\min_\beta\ \frac{1}{2}\|X\beta - y\|_2^2 + \tau\|\beta\|_1, \tag{2.18}$$

    where $y \in \mathbb{R}^n$, $X \in \mathbb{R}^{n \times p}$, and $\tau > 0$ is a scalar regularization parameter that controls the strength of the penalty.

    ADMM introduces an auxiliary variable z to separate the least squares and L1 penalty terms:

    $$\min_{\beta, z}\ \frac{1}{2}\|X\beta - y\|_2^2 + \tau\|z\|_1 \quad \text{s.t.}\ \beta - z = 0. \tag{2.19}$$

    ADMM solves this using the augmented Lagrangian:

    $$L_\rho(\beta, z, u) = \frac{1}{2}\|X\beta - y\|_2^2 + \tau\|z\|_1 + \frac{\rho}{2}\|\beta - z + u\|_2^2,$$

    where $u$ is the scaled dual variable and $\rho > 0$ is the penalty parameter controlling convergence. ADMM then alternates between updating $\beta$, $z$, and $u$:

    (1) Update β (least squares step):

    $$\beta^{(k+1)} = (X^TX + \rho I)^{-1}\left(X^Ty + \rho(z^{(k)} - u^{(k)})\right), \tag{2.20}$$

    where $X^TX + \rho I$ is always invertible, since $\rho > 0$.

    (2) Update z (soft-thresholding step):

    $$z^{(k+1)} = S_{\tau/\rho}\left(\beta^{(k+1)} + u^{(k)}\right), \tag{2.21}$$

    where $S_{\tau/\rho}$ is the soft-thresholding operator, defined as:

    $$S_\varepsilon(\nu) = \begin{cases} \nu - \varepsilon, & \text{if } \nu > \varepsilon, \\ 0, & \text{if } -\varepsilon \le \nu \le \varepsilon, \\ \nu + \varepsilon, & \text{if } \nu < -\varepsilon. \end{cases}$$

    (3) Update u (dual variable update):

    $$u^{(k+1)} = u^{(k)} + \beta^{(k+1)} - z^{(k+1)}. \tag{2.22}$$
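    Updates (2.20)-(2.22) map directly onto code. The following minimal numpy sketch is our own illustration (not code from the paper); it keeps the scaled-dual convention used above and factors the system matrix once, since it does not change across iterations:

```python
import numpy as np

def soft_threshold(v, eps):
    # Elementwise soft-thresholding operator S_eps(v) defined above.
    return np.sign(v) * np.maximum(np.abs(v) - eps, 0.0)

def lasso_admm(X, y, tau, rho=1.0, iters=200):
    """Solve the Lasso (2.18) via the ADMM updates (2.20)-(2.22)."""
    n, p = X.shape
    beta, z, u = np.zeros(p), np.zeros(p), np.zeros(p)
    M = X.T @ X + rho * np.eye(p)          # invertible whenever rho > 0
    for _ in range(iters):
        beta = np.linalg.solve(M, X.T @ y + rho * (z - u))   # Eq. (2.20)
        z = soft_threshold(beta + u, tau / rho)              # Eq. (2.21)
        u = u + beta - z                                     # Eq. (2.22), scaled dual
    return beta
```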

    Lasso LSTSVM by ADMM. Building on the foundational concepts discussed earlier, this section presents the formulation for solving LSTSVM with L1-norm regularization, also known as the Lasso technique, using the ADMM framework. ADMM is chosen over traditional optimization techniques due to its strong scalability and its ability to decompose complex objectives—particularly those involving non-smooth terms like the L1-norm—into simpler subproblems. This makes it especially effective for high-dimensional, large-scale datasets.

    Linear case. In this work, we replace the L2-norm of the penalty term in LSTSVM (2.9) and (2.10) with the L1-norm, allowing the problem to be reformulated in a manner similar to the Lasso technique. This modification promotes sparsity in the slack variables and enables the use of ADMM to decompose the problem into simpler subproblems, improving computational efficiency. The modified optimization problems are formulated as follows:

    $$\min_{w_1, b_1, \xi_2}\ \frac{1}{2}\|Aw_1 + e_1 b_1\|_2^2 + \frac{c_1}{2}\|\xi_2\|_1 \quad \text{s.t.}\ -(Bw_1 + e_2 b_1) + \xi_2 = e_2, \tag{3.1}$$

    and

    $$\min_{w_2, b_2, \xi_1}\ \frac{1}{2}\|Bw_2 + e_2 b_2\|_2^2 + \frac{c_2}{2}\|\xi_1\|_1 \quad \text{s.t.}\ -(Aw_2 + e_1 b_2) + \xi_1 = e_1, \tag{3.2}$$

    where $c_1, c_2$ are given positive parameters.

    To facilitate efficient computation, we reformulate the problems using auxiliary variables:

    $$\min_{x_1, \xi_2}\ \frac{1}{2}\|Fx_1\|_2^2 + \tau_1\|\xi_2\|_1 \quad \text{s.t.}\ -Ex_1 + \xi_2 = e_2, \tag{3.3}$$

    and

    $$\min_{x_2, \xi_1}\ \frac{1}{2}\|Ex_2\|_2^2 + \tau_2\|\xi_1\|_1 \quad \text{s.t.}\ -Fx_2 + \xi_1 = e_1, \tag{3.4}$$

    where $F = [A\ \ e_1]$, $E = [B\ \ e_2]$, $x_1 = [w_1\ \ b_1]^T$, and $x_2 = [w_2\ \ b_2]^T$.

    The augmented Lagrangian for the first reformulated problem (for $x_1$ and $\xi_2$) is given by:

    $$L_\rho(x_1, \xi_2, y_1) = \frac{1}{2}\|Fx_1\|_2^2 + \tau_1\|\xi_2\|_1 + y_1^T(Ex_1 - \xi_2 + e_2) + \frac{\rho}{2}\|Ex_1 - \xi_2 + e_2\|_2^2, \tag{3.5}$$

    where $\rho > 0$ serves as a penalty parameter and $y_1$ is the dual variable. The ADMM update rules for solving this problem are as follows:

    (1) $x_1$-update: Solve for $x_1^{(k+1)}$:

    $$x_1^{(k+1)} = \arg\min_{x_1}\ L_\rho(x_1, \xi_2^{(k)}, y_1^{(k)}),$$

    resulting in:

    $$x_1^{(k+1)} = (F^TF + \rho E^TE)^{-1}\left[E^T\left(\rho\xi_2^{(k)} - \rho e_2 - y_1^{(k)}\right)\right].$$

    (2) $\xi_2$-update: Solve for $\xi_2^{(k+1)}$:

    $$\xi_2^{(k+1)} = \arg\min_{\xi_2}\ L_\rho(x_1^{(k+1)}, \xi_2, y_1^{(k)}),$$

    which simplifies to the soft-thresholding operation:

    $$\xi_2^{(k+1)} = S_{\tau_1/\rho}\left(Ex_1^{(k+1)} + e_2 + \frac{y_1^{(k)}}{\rho}\right),$$

    where $S_{\tau_1/\rho}(\cdot)$ is the soft-thresholding operator.

    (3) Dual variable update: Update the dual variable $y_1$:

    $$y_1^{(k+1)} = y_1^{(k)} + \rho\left(Ex_1^{(k+1)} - \xi_2^{(k+1)} + e_2\right).$$

    The iterative steps for solving the problem using ADMM are summarized in Algorithm 1:

    Algorithm 1 ADMM for solving Lasso LSTSVM, 1st plane.
    % initialize $\xi_2, y_1$
    $\xi_2^{(0)} \leftarrow \bar{\xi}_2$
    $y_1^{(0)} \leftarrow \bar{y}_1$
    for $k = 0, 1, 2, \ldots$ do
      $x_1^{(k+1)} = (F^TF + \rho E^TE)^{-1}[E^T(\rho\xi_2^{(k)} - \rho e_2 - y_1^{(k)})]$
      $\xi_2^{(k+1)} = S_{\tau_1/\rho}(Ex_1^{(k+1)} + e_2 + y_1^{(k)}/\rho)$
      $y_1^{(k+1)} = y_1^{(k)} + \rho(Ex_1^{(k+1)} - \xi_2^{(k+1)} + e_2)$
    end for
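    A direct transcription of Algorithm 1 might look as follows (a sketch with our own choices of zero initialization and a fixed iteration budget instead of a stopping test; soft_threshold is the helper defined in the Lasso sketch above):

```python
import numpy as np

def lasso_lstsvm_plane1(A, B, tau1, rho=1.0, iters=200):
    """ADMM for the first Lasso LSTSVM plane, Eq. (3.3) / Algorithm 1."""
    m1, m2 = A.shape[0], B.shape[0]
    F = np.hstack([A, np.ones((m1, 1))])       # F = [A  e1]
    E = np.hstack([B, np.ones((m2, 1))])       # E = [B  e2]
    e2 = np.ones(m2)
    xi2, y1 = np.zeros(m2), np.zeros(m2)       # initialize xi2, y1
    M = F.T @ F + rho * (E.T @ E)              # system matrix, fixed across iterations
    for _ in range(iters):
        x1 = np.linalg.solve(M, E.T @ (rho * xi2 - rho * e2 - y1))
        xi2 = soft_threshold(E @ x1 + e2 + y1 / rho, tau1 / rho)
        y1 = y1 + rho * (E @ x1 - xi2 + e2)
    return x1[:-1], x1[-1]                     # w1, b1
```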

    Similarly, the second reformulated problem can be solved using the same concepts as the first problem, as previously demonstrated. The iterative steps for updating x2,ξ1, and y2 are summarized in Algorithm 2:

    Algorithm 2 ADMM for solving Lasso LSTSVM, 2nd plane.
    % initialize $\xi_1, y_2$
    $\xi_1^{(0)} \leftarrow \bar{\xi}_1$
    $y_2^{(0)} \leftarrow \bar{y}_2$
    for $k = 0, 1, 2, \ldots$ do
      $x_2^{(k+1)} = (E^TE + \rho F^TF)^{-1}[F^T(\rho\xi_1^{(k)} - \rho e_1 + y_2^{(k)})]$
      $\xi_1^{(k+1)} = S_{\tau_2/\rho}(Fx_2^{(k+1)} + e_1 - y_2^{(k)}/\rho)$
      $y_2^{(k+1)} = y_2^{(k)} + \rho(-Fx_2^{(k+1)} + \xi_1^{(k+1)} - e_1)$
    end for
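    The second plane admits the same transcription; a sketch mirroring Algorithm 2 under the same assumptions (zero initialization, fixed iteration budget, soft_threshold as above):

```python
import numpy as np

def lasso_lstsvm_plane2(A, B, tau2, rho=1.0, iters=200):
    """ADMM for the second Lasso LSTSVM plane, Eq. (3.4) / Algorithm 2."""
    m1, m2 = A.shape[0], B.shape[0]
    F = np.hstack([A, np.ones((m1, 1))])       # F = [A  e1]
    E = np.hstack([B, np.ones((m2, 1))])       # E = [B  e2]
    e1 = np.ones(m1)
    xi1, y2 = np.zeros(m1), np.zeros(m1)       # initialize xi1, y2
    M = E.T @ E + rho * (F.T @ F)
    for _ in range(iters):
        x2 = np.linalg.solve(M, F.T @ (rho * xi1 - rho * e1 + y2))
        xi1 = soft_threshold(F @ x2 + e1 - y2 / rho, tau2 / rho)
        y2 = y2 + rho * (-F @ x2 + xi1 - e1)
    return x2[:-1], x2[-1]                     # w2, b2
```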

    Nonlinear case. In real-world scenarios, the linear kernel method is not always applicable, as large-scale datasets often exhibit higher complexity. Therefore, we extend the algorithm to nonlinear Lasso LSTSVM using kernel techniques [14]. We modify the optimization problems (3.1) and (3.2) as follows:

    $$\min_{\tilde{w}_1, b_1, \xi_2}\ \frac{1}{2}\|K(A, X)\tilde{w}_1 + e_1 b_1\|_2^2 + \frac{c_1}{2}\|\xi_2\|_1 \quad \text{s.t.}\ -(K(B, X)\tilde{w}_1 + e_2 b_1) + \xi_2 = e_2, \tag{3.6}$$

    and

    $$\min_{\tilde{w}_2, b_2, \xi_1}\ \frac{1}{2}\|K(B, X)\tilde{w}_2 + e_2 b_2\|_2^2 + \frac{c_2}{2}\|\xi_1\|_1 \quad \text{s.t.}\ -(K(A, X)\tilde{w}_2 + e_1 b_2) + \xi_1 = e_1, \tag{3.7}$$

    where $K(x, y) = \phi(x)^T\phi(y)$ is a kernel function for any $x$ and $y$, and $\phi$ maps data points from $\mathbb{R}^n$ to $\mathbb{R}^m$ ($n < m$).

    The model can be reformulated as follows:

    $$\min_{x_1, \xi_2}\ \frac{1}{2}\|Gx_1\|_2^2 + \tau_1\|\xi_2\|_1 \quad \text{s.t.}\ -Hx_1 + \xi_2 = e_2, \tag{3.8}$$

    and

    $$\min_{x_2, \xi_1}\ \frac{1}{2}\|Hx_2\|_2^2 + \tau_2\|\xi_1\|_1 \quad \text{s.t.}\ -Gx_2 + \xi_1 = e_1, \tag{3.9}$$

    where $G = [K(A, X)\ \ e_1]$, $H = [K(B, X)\ \ e_2]$, $x_1 = [\tilde{w}_1\ \ b_1]^T$, and $x_2 = [\tilde{w}_2\ \ b_2]^T$.

    The solution is similar to the linear kernel case: construct the augmented Lagrangian and follow the ADMM framework to update $x$, $\xi$, and the dual variables $y$.
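    Since (3.8) and (3.9) have exactly the shape of (3.3) and (3.4), the kernel case can reuse the linear routines once the kernel matrices are formed. A sketch of the first plane (our own illustration, reusing lasso_lstsvm_plane1 from above; the Gaussian kernel with sigma = 1 matches the experiments reported below):

```python
import numpy as np

def rbf_kernel(U, V, sigma=1.0):
    # K(u, v) = exp(-||u - v||^2 / (2 sigma^2)), computed blockwise.
    d2 = np.sum(U**2, 1)[:, None] + np.sum(V**2, 1)[None, :] - 2 * U @ V.T
    return np.exp(-d2 / (2 * sigma**2))

def kernel_lasso_lstsvm_plane1(A, B, tau1, rho=1.0, iters=200, sigma=1.0):
    X = np.vstack([A, B])                      # all training points
    KA, KB = rbf_kernel(A, X, sigma), rbf_kernel(B, X, sigma)
    # G = [K(A,X) e1] and H = [K(B,X) e2] are built inside the linear
    # routine, so (3.8) is solved exactly like (3.3).
    return lasso_lstsvm_plane1(KA, KB, tau1, rho, iters)
```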

    As for complexity, solving the linear system contributes $O(n^3)$ per iteration, while the proximal operator is simple and costs only $O(n)$. If $k$ iterations are required, the total complexity is $O(k(n^3 + n))$. For convex problems, $k$ scales as $O(1/\epsilon)$, so the total complexity becomes $O(n^3/\epsilon)$. The criterion that ensures convergence of our ADMM is feasibility of both the primal iterates and the dual (Lagrangian) variables.

    Accelerated Lasso LSTSVM with guard condition. In this work, we employ an accelerated ADMM framework [15] designed to ensure a consistent reduction in the combined residual γ by introducing a guard condition. This condition regulates the acceleration process, allowing it to proceed when met and reverting to the standard ADMM iteration otherwise. This adaptive mechanism prevents potential instability or divergence that might arise from direct acceleration.

    Let $u$ denote the target vector we aim to compute, and let $u^{(k)}$ be its approximation at the $k$th iteration. For clarity, $\bar{u}^{(k)}$ denotes the approximation computed by the standard ADMM step, while $\hat{u}^{(k)}$ denotes the vector produced by applying an acceleration strategy to the ADMM process. In an acceleration step, $\hat{u}^{(k)}$ is calculated by combining the current and previous standard approximations, often as follows:

    $$\hat{u}^{(k)} = \bar{u}^{(k)} + \alpha^{(k)}\left(\bar{u}^{(k)} - \bar{u}^{(k-1)}\right), \tag{3.10}$$

    where α(k) is an adaptive parameter governing the acceleration. The proposed accelerated ADMM method, incorporating the guard condition, is formally outlined in Algorithm 3.

    Algorithm 3 Accelerated Lasso LSTSVM with guard condition.
    % initialize $\xi_2, y_1$
    $\xi_2^{(0)} \leftarrow \tilde{\xi}_2$
    $y_1^{(0)} \leftarrow \tilde{y}_1$
    for $k = 0, 1, 2, \ldots$ do
      $x_1^{(k+1)} = (F^TF + \rho E^TE)^{-1}[E^T(\rho\xi_2^{(k)} - \rho e_2 - y_1^{(k)})]$
      $\bar{\xi}_2^{(k+1)} = S_{\tau_1/\rho}(Ex_1^{(k+1)} + e_2 + y_1^{(k)}/\rho)$
      $\bar{y}_1^{(k+1)} = y_1^{(k)} + \rho(Ex_1^{(k+1)} - \bar{\xi}_2^{(k+1)} + e_2)$
      % Begin of acceleration steps
      $\hat{\xi}_2^{(k+1)} = \bar{\xi}_2^{(k+1)} + \alpha^{(k)}(\bar{\xi}_2^{(k+1)} - \bar{\xi}_2^{(k)})$
      $\hat{y}_1^{(k+1)} = \bar{y}_1^{(k+1)} + \alpha^{(k)}(\bar{y}_1^{(k+1)} - \bar{y}_1^{(k)})$
      % End of acceleration steps
      $\gamma^{(k+1)} = \rho^{-1}\|\hat{y}_1^{(k+1)} - \hat{y}_1^{(k)}\|_2^2 + \rho\|\hat{\xi}_2^{(k+1)} - \hat{\xi}_2^{(k)}\|_2^2$
      % Begin of guard condition
      if $\gamma^{(k+1)} < \gamma^{(0)}\eta^{k+1}$ then
        $\xi_2^{(k+1)} = \hat{\xi}_2^{(k+1)}$
        $y_1^{(k+1)} = \hat{y}_1^{(k+1)}$
      else
        $\xi_2^{(k+1)} = \bar{\xi}_2^{(k+1)}$
        $y_1^{(k+1)} = \bar{y}_1^{(k+1)}$
        $\gamma^{(k+1)} = \rho^{-1}\|y_1^{(k+1)} - y_1^{(k)}\|_2^2 + \rho\|\xi_2^{(k+1)} - \xi_2^{(k)}\|_2^2$
      end if
      % End of guard condition
    end for
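    For concreteness, the guard logic of Algorithm 3 can be transcribed as follows. This is a simplified sketch of ours, not the authors' code: the residual gamma is measured against the last accepted iterates, gamma^(0) is taken from the first iteration, and the fixed parameters eta = 0.85 and alpha = 0.2 anticipate the discussion below; soft_threshold is the helper defined earlier.

```python
import numpy as np

def accelerated_plane1(F, E, e2, tau1, rho=1.0, alpha=0.2, eta=0.85, iters=200):
    """Accelerated ADMM with guard condition for the first plane (Algorithm 3)."""
    m2 = E.shape[0]
    xi2, y1 = np.zeros(m2), np.zeros(m2)           # accepted iterates
    M = F.T @ F + rho * (E.T @ E)                  # fixed system matrix
    gamma0 = None
    for k in range(iters):
        x1 = np.linalg.solve(M, E.T @ (rho * xi2 - rho * e2 - y1))
        xi2_bar = soft_threshold(E @ x1 + e2 + y1 / rho, tau1 / rho)  # standard step
        y1_bar = y1 + rho * (E @ x1 - xi2_bar + e2)
        # Acceleration step (3.10) with fixed momentum alpha < 1/3.
        xi2_hat = xi2_bar + alpha * (xi2_bar - xi2)
        y1_hat = y1_bar + alpha * (y1_bar - y1)
        # Combined residual of the accelerated candidate.
        gamma = np.sum((y1_hat - y1)**2) / rho + rho * np.sum((xi2_hat - xi2)**2)
        gamma0 = gamma if gamma0 is None else gamma0
        # Guard: accept acceleration only while the residual keeps shrinking.
        if gamma < gamma0 * eta**(k + 1):
            xi2, y1 = xi2_hat, y1_hat
        else:
            xi2, y1 = xi2_bar, y1_bar
    return x1[:-1], x1[-1]                         # w1, b1
```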

    In our algorithm, the guard condition is designed to monitor the progress of the accelerated ADMM updates and decide whether to accept the acceleration or revert to standard ADMM updates. To make this decision robust and effective, we adopt the parameter selection strategy proposed in [15]. Specifically, the threshold parameter $\eta \in (0, 1)$ is used to determine whether the accelerated update yields sufficient improvement in the combined residual. If the reduction is less than a factor of $\eta$, the algorithm rejects the acceleration step and reverts to the previous iterate.

    In this work, we set $\eta = 0.85$, following the empirical recommendation in [15]. This choice reflects a balanced trade-off between acceleration and stability. Additionally, the momentum parameter $\alpha^{(k)}$ is used to control the acceleration step size. Following the stationary acceleration approach proposed in [15], we fix $\alpha^{(k)} = \alpha$ for all iterations, rather than using a dynamically updated rule. This simplification reduces computational overhead and avoids potential oscillations caused by increasing momentum.

    In particular, [15] provides a convergence proof under the condition $\alpha < 1/3$, ensuring the stability of the accelerated scheme. Based on this, we select a fixed value of $\alpha$ within this bound, e.g., $\alpha = 0.2$, to maintain the theoretical convergence guarantees while benefiting from the improved convergence speed that acceleration offers.

    This approach achieves faster convergence by leveraging the vectors computed from standard ADMM iterations as a reference for acceleration steps. Notably, selecting appropriate parameters for the guard condition is crucial to optimizing the method's overall efficiency.

    This section presents a comparative study evaluating the effectiveness of the accelerated Lasso LSTSVM using ADMM with both linear and nonlinear kernels. We assess the classification accuracy and computational efficiency of the standard LSTSVM, Lasso LSTSVM, and its accelerated variant.

    For experiments with linear kernels, we utilize datasets from the UCI machine learning repository [4], including ionosphere, breast cancer, Pima Indians, dry bean, satellite, predict students' dropout and academic success (PSDA), and QSAR biodegradation. For nonlinear classification, we employ EEG eye state and magic telescope (also from the UCI repository), along with the moon dataset—a synthetic dataset from Kaggle [16]—and the electricity dataset [17], a real-world dataset obtained from OpenML.

    All datasets are designed for binary or multi-class classification tasks and span diverse domains such as medicine, physics, and social sciences. Their varying characteristics in terms of sample size, feature dimension, and complexity provide a robust foundation for evaluating the generalization performance of the proposed models across different scenarios.

    As illustrated in Figure 1, a synthetic dataset was constructed to evaluate model performance on linearly separable data. The dataset consists of 500 samples with 2 features and 2 classes. Orange points represent Class 1, while blue points represent Class 2. To simulate challenging conditions, we introduced outliers by mislabeling 10% of the data—an intentionally high proportion, considering that outliers typically account for less than 5% of real-world datasets.

    Figure 1.  A synthetic dataset with outliers that lead to misclassification.

    We then evaluated the models by splitting the dataset into 80% training and 20% testing data. The results show that Lasso LSTSVM achieved an accuracy of 94%, compared to 92% from the standard LSTSVM. This highlights the Lasso model's robustness to label noise and its ability to generalize better in the presence of significant outlier contamination.
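    The setup can be reproduced along the following lines (a sketch only; the exact generator, seed, and class geometry behind Figure 1 are not specified in the text, so the make_blobs configuration here is our own assumption):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 500 samples, 2 features, 2 classes; blob geometry and seed are assumptions.
X, t = make_blobs(n_samples=500, n_features=2, centers=2,
                  cluster_std=1.0, random_state=0)
t = np.where(t == 1, 1, -1)                     # relabel classes as +1 / -1

# Introduce outliers by flipping the labels of 10% of the points.
flip = rng.choice(len(t), size=len(t) // 10, replace=False)
t[flip] = -t[flip]

# 80/20 train/test split, as in the experiment.
X_tr, X_te, t_tr, t_te = train_test_split(X, t, test_size=0.2, random_state=0)
A, B = X_tr[t_tr == 1], X_tr[t_tr == -1]        # class matrices for the twin models
```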

    The classification results are influenced by the selection of penalty parameters, which are set to specific values for linear and nonlinear kernels to ensure a fair comparison. For nonlinear classifiers, the Gaussian RBF kernel is utilized, with the kernel function defined as

    $$K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right),$$

    with the kernel parameter $\sigma = 1$, due to its well-established capability to model complex, nonlinear relationships in high-dimensional feature spaces. It is a widely used choice in kernel-based learning methods because it introduces locality and smoothness in decision boundaries, which is beneficial for datasets with intricate structures, such as EEG eye state and magic telescope.

    In our experiments, we fixed the kernel parameter to $\sigma = 1$ for consistency and to avoid additional hyperparameter tuning that could overshadow the performance comparison between models. However, we acknowledge that the performance of RBF-based models is sensitive to the choice of $\sigma$. A very small $\sigma$ may lead to overfitting by capturing noise, while a large $\sigma$ can oversmooth decision boundaries and underfit the data. The experimental results are summarized in Tables 1–4, where $m$ and $n$ indicate the number of training samples and the feature dimension, respectively.

    Table 1.  Accuracy ± std (%) comparisons of two algorithms using linear kernel.
    Dataset (m×n) | Standard LSTSVM | Lasso LSTSVM
    Ionosphere (351×33) | 80.00±4.67 | 90.91±5.70
    Breast cancer (558×5) | 92.47±2.04 | 91.58±2.55
    Pima indians (768×8) | 76.71±6.27 | 85.53±6.27
    Dry Bean (5573×16) | 97.78±0.53 | 97.76±0.68
    Satellite (5100×36) | 99.27±0.32 | 99.18±0.34
    PSDA (3630×36) | 90.77±1.50 | 91.19±1.24
    QSAR (1055×42) | 86.13±4.69 | 87.94±3.05
    Table 2.  Accuracy ± std (%) comparisons of two algorithms using Gaussian kernel.
    Dataset (m×n) | Standard LSTSVM | Lasso LSTSVM
    Table 3.  Performance comparisons of two algorithms using linear kernel.
    Dataset (m×n) | Lasso LSTSVM: Acc ± std (%), time (s) | Accelerated Lasso LSTSVM: Acc ± std (%), time (s)
    Ionosphere (351×33) | 90.91±5.70, 1.12 | 85.71±5.33, 0.84
    Breast cancer (558×5) | 91.58±2.55, 4.32 | 91.22±2.46, 4.50
    Pima indians (768×8) | 85.53±6.27, 4.93 | 85.53±6.26, 4.42
    Dry Bean (5573×16) | 97.76±0.68, 10.49 | 97.54±0.54, 6.80
    Satellite (5100×36) | 99.18±0.34, 37.49 | 99.31±0.35, 7.25
    PSDA (3630×36) | 91.19±1.24, 9.83 | 91.74±1.44, 3.95
    QSAR (1055×42) | 87.94±3.05, 4.37 | 86.96±3.93, 2.50
    Table 4.  Performance comparisons of two algorithms using Gaussian kernel.
    Dataset (m×n) | Lasso LSTSVM: Acc ± std (%), time (s) | Accelerated Lasso LSTSVM: Acc ± std (%), time (s)
    Moon (200×3) | 95.50±6.43, 1.91 | 96.00±6.58, 1.82
    EEG eye state (14979×14) | 85.10±0.76, 141.26 | 85.69±0.62, 117.74
    Magic telescope (19020×10) | 77.08±0.62, 116.39 | 77.18±0.75, 80.15
    Electricity (45312×6) | 73.08±0.81, 65.76 | 72.86±0.96, 84.96

    The datasets used in this study exhibit varying degrees of outlier presence. Based on the interquartile range (IQR) method [18], most datasets were found to contain a moderate to high number of outliers. Notably, the Pima Indians and QSAR datasets have a particularly high concentration of outliers. For Pima Indians, this is primarily due to missing or implausible values treated as real numbers—features such as insulin, BMI, skin thickness, and blood pressure often contain zeros, which are physiologically impossible and act as placeholders for missing data but are not properly handled in the raw dataset. The QSAR dataset, on the other hand, naturally includes outliers due to its chemical diversity, high dimensionality, and non-normal feature distributions.
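    The IQR screening flags a value as an outlier when it falls outside [Q1 - 1.5·IQR, Q3 + 1.5·IQR] per feature; a minimal check of the kind used here (the 1.5 multiplier is the conventional choice and our assumption, as the text does not state it):

```python
import numpy as np

def iqr_outlier_fraction(X, k=1.5):
    # Flag entries outside [Q1 - k*IQR, Q3 + k*IQR], computed per feature.
    q1, q3 = np.percentile(X, 25, axis=0), np.percentile(X, 75, axis=0)
    iqr = q3 - q1
    mask = (X < q1 - k * iqr) | (X > q3 + k * iqr)
    # Fraction of samples containing at least one flagged feature value.
    return np.mean(mask.any(axis=1))
```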

    Table 1 compares the classification accuracy of the standard LSTSVM and the Lasso LSTSVM using a 10-fold cross-validation approach. Results show that Lasso LSTSVM outperforms the standard LSTSVM on datasets such as ionosphere, Pima Indians, PSDA, and QSAR, all of which contain relatively high levels of outliers. In contrast, for datasets like breast cancer, dry bean, and satellite, which exhibit fewer outliers, the performance gap between the models is narrower.

    As shown in Table 2, aside from the moon dataset, which is synthetic and generated from Kaggle, the remaining three datasets (EEG eye state, magic telescope, and electricity) contain a considerable number of outliers. Interestingly, the standard LSTSVM performs better than the Lasso LSTSVM on the Moon dataset, while Lasso LSTSVM shows higher accuracy for EEG eye state and magic telescope. However, there are cases—such as the electricity dataset—where the standard LSTSVM still performs better despite the presence of outliers. This can be attributed to the nature of the outliers in this dataset, which result from natural fluctuations rather than sensor errors or data entry mistakes. In such cases, the L2-norm of the standard LSTSVM may better capture the underlying data structure, whereas the L1-norm regularization of Lasso LSTSVM might overly penalize these variations, potentially missing broader trends.

    Tables 3 and 4 compare the computational time between the Lasso LSTSVM and the Accelerated Lasso LSTSVM models. The results clearly demonstrate that, for both linear and Gaussian RBF kernels, the accelerated Lasso LSTSVM significantly reduces computation time while maintaining classification accuracy comparable to that of the standard Lasso LSTSVM. This highlights the effectiveness of the proposed acceleration strategy, particularly for large-scale or high-dimensional datasets where computational efficiency is critical.

    It is widely understood that in regression, ridge regression is used when many predictors or independent variables may be relevant, particularly in multicollinear contexts. In contrast, Lasso regression is employed when we suspect that only a few predictors are significant, allowing for effective feature selection. However, ridge regularization is more sensitive to outliers because it minimizes squared errors, which can lead to biased coefficients that are heavily influenced by those outliers. This principle applies to both the standard LSTSVM models and the Lasso LSTSVM models. Consequently, existing LSTSVM models are likely more sensitive to outliers than the Lasso LSTSVM models.

    However, in some instances the standard LSTSVM outperformed the Lasso LSTSVM. This may be due to the standard LSTSVM's ability to handle a larger number of relevant predictors or independent variables more effectively. Like ridge regression, the standard LSTSVM shrinks the coefficients of irrelevant features without eliminating any of them entirely; if the dataset contains many irrelevant features, this can produce a complex model with numerous retained features. Conversely, when most of a dataset's independent variables are relevant, the standard LSTSVM can perform better than the Lasso LSTSVM, as shown in some of our examples.

    This research presents a comparative analysis of three classification models: the standard LSTSVM, the Lasso LSTSVM, and the proposed accelerated Lasso LSTSVM, all within the ADMM framework. The study compares the accuracy of the standard LSTSVM with that of the Lasso LSTSVM, and the computational time of the Lasso LSTSVM with that of the accelerated Lasso LSTSVM. The inclusion of L1-norm regularization makes the model more robust to outliers, enabling it to adapt well to datasets with a high number of outliers. Additionally, the acceleration step in the ADMM process helps reduce computation time without compromising classification accuracy.

    Experimental results on several benchmark datasets, both linear and nonlinear, show that the proposed model outperforms the standard LSTSVM in terms of accuracy. Although there are cases where the standard LSTSVM achieves better accuracy than the Lasso type, this can be attributed to various factors revealed through analysis. However, there remains potential for improvement in computational efficiency, particularly for large-scale datasets. The accelerated model effectively reduces the number of iterations and training time compared to the non-accelerated version. Furthermore, the guard condition applied with the acceleration step ensures the algorithm's stability and guarantees reliable results. This study demonstrates that combining robustness to outliers with ADMM tuning and acceleration steps produces an efficient model that is well-suited for real-world data applications. Future research could extend this work by developing adaptive acceleration techniques and investigating theoretical convergence guarantees to achieve even faster and more reliable algorithms for practical use.

    Thanasak Mouktonglang: conceptualization, methodology, validation, formal analysis, investigation, data curation, writing - review and editing, supervision, funding acquisition; Rujira Fongmun: methodology, software, validation, investigation, data curation, writing - original draft, writing - review and editing. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was partially supported by the Faculty of Science, Chiang Mai University. The first author was supported by the Development and Promotion of Science and Technology Talents Project (DPST) scholarship.

    The authors declare no conflicts of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
