Research article

The orthogonal reflection method for the numerical solution of linear systems

  • Received: 17 March 2025 Revised: 20 May 2025 Accepted: 23 May 2025 Published: 04 June 2025
  • MSC : 65F10

  • This paper extends the convergence analysis of the reflection method from the case of 2 equations to the case of $n$ equations. A novel approach called the orthogonal reflection method is also proposed. The orthogonal reflection method comprises two key steps. First, a Householder transformation is employed to derive an equivalent system of equations with an orthogonal coefficient matrix that maintains the same solution set as the original system. Second, the reflection method is applied to efficiently solve this transformed system. Compared with the reflection method, the orthogonal reflection method significantly enhances the convergence speed, especially when the angles between the hyperplanes represented by the linear system are acute. We also derive its convergence rate, demonstrating that the orthogonal reflection method converges from an arbitrary starting point in $\mathbb{R}^n$. The necessity of orthogonalization is presented in the form of a theorem in $\mathbb{R}^2$. When the coefficient matrix has a large condition number, the orthogonal reflection method can still compute relatively accurate numerical solutions rapidly. By comparison with algorithms including Jacobi iteration, Gauss-Seidel iteration, the conjugate gradient method, GMRES, weighted RBAS, and the reflection method on 10×10 random matrices, 1000×1000 sparse matrices, and 1000×1000 randomly generated full-rank matrices, the efficiency and robustness of the orthogonal reflection method are demonstrated.

    Citation: Wenyue Feng, Hailong Zhu. The orthogonal reflection method for the numerical solution of linear systems[J]. AIMS Mathematics, 2025, 10(6): 12888-12899. doi: 10.3934/math.2025579




    A nested optimization problem with two levels in a hierarchy, i.e., upper-level and lower-level decision-making, is known as the bilevel programming problem. Each level has its own objective function and constraints. The decision-maker at the upper level always takes the lead, followed by the decision-maker at the lower level. The objective function and constraints of the upper-level problem depend both on the upper-level decision variables and on the optimal solution of the lower-level problem. The decision-maker at the lower level optimizes its own objective function over the variables left to it, given the variables chosen by the decision-maker at the upper level, who in turn chooses its variables with full knowledge of the lower level's potential responses. In contrast to multi-objective mathematical programming methodologies, bilevel mathematical programming stresses the system's non-cooperative nature. This hierarchical model has several applications, including resource allocation, decentralized control, and network design; its successful application depends on how well it can be solved in the presence of realistic complications [3]. Bilevel mathematical programming has attracted a lot of attention, and many effective algorithms have been presented. The existing methods for solving NBLP problems can be divided into four categories: the Karush-Kuhn-Tucker condition approach [1,15,17,19,21], the penalty function approach [24,29], the descent approach [22,36], and the evolutionary approach [41].

    In this paper, a class of NBLP problems is reduced to a traditional nonlinear programming problem with complementarity constraints: the lower-level problem is replaced by its Karush-Kuhn-Tucker optimality conditions, and the resulting complementarity condition is then smoothed with the CHKS smoothing function.

    The smoothed nonlinear programming (SNLP) problem is solved by using the nonmonotone active-set interior-point trust-region technique to obtain an approximately optimal solution to the nonlinear bilevel programming problem. In this technique, the smoothed nonlinear programming problem is first transformed into an equality-constrained optimization problem with bounded variables by using an active-set approach and the penalty method; for more details see [12,13,14,15,18,19]. To solve the equality-constrained optimization problem with bounded variables, Newton's interior-point approach [6] is used. Because Newton's interior-point method is a local method, it might not converge if the starting point is far from a stationary point. A trust-region strategy is used to treat this problem and ensure convergence to a stationary point from any starting point. A trust-region technique can induce strong global convergence and is a very important method for solving unconstrained and constrained optimization problems; see [10,11,12,13,14,16,17,18,19]. One advantage of the trust-region technique is that it does not require the model's objective function to be convex. However, in the traditional trust-region strategy, after solving a trust-region subproblem we must use some criterion to decide whether the trial step is acceptable; otherwise, the trust-region radius needs to be reduced. A method for calculating the trust-region radius $\delta_k$ at each iteration is therefore an important part of trust-region techniques. The standard trust-region strategy is predicated on the agreement between the objective function and the model: the radius is updated according to the ratio $t_k = Ared_k/Pred_k$, where $Ared_k$ refers to the actual reduction and $Pred_k$ to the predicted reduction. Whenever $t_k$ is close to 1, there is good agreement between the model and the objective function over the current trust region. It is well known that the standard trust-region radius is independent of the gradient and Hessian of the objective function, so one cannot tell whether the radius is suitable for the whole implementation. This might increase the number of subproblems that must be solved in the method's inner iterations, which reduces the method's efficiency.

    To overcome this problem, many authors have proposed various nonmonotone trust-region methods; for example, see [9,31,37,38,43,44]. Motivated by the nonmonotone trust-region strategy in [31], we use it in our method. It is generally promising and efficient, and it can overcome the aforementioned shortcomings.

    Furthermore, the usefulness of the CHKS smoothing function combined with the nonmonotone active-set interior-point trust-region algorithm for solving NBLP problems was examined by using several benchmark problems and a real-world case about a watershed trading decision-making problem. Numerical experiments show that the suggested method surpasses rival algorithms in terms of efficacy.

    This paper is organized as follows: Section 2 introduces the mathematical formulation of the NBLP problem, basic definitions of the CHKS smoothing function, and the smoothing method for the nonlinear complementarity problem used to obtain the SNLP problem. Section 3 introduces the nonmonotone active-set interior-point trust-region algorithm for solving the SNLP problem. Results from simulations on several benchmark problems and a real-world case about a watershed trading decision-making problem are reported in Section 4. The conclusion is given in Section 5.

    In this paper, we will consider the following NBLP problem

    $$\begin{aligned}
    \min_{a_u \le y \le b_u}\ & f_u(y,z)\\
    \text{s.t.}\ & c_u(y,z) \le 0,\\
    & \min_{a_l \le z \le b_l}\ f_l(y,z),\\
    & \quad \text{s.t.}\ c_l(y,z) \le 0,
    \end{aligned} \tag{2.1}$$

    where $y \in \mathbb{R}^{n_1}$ and $z \in \mathbb{R}^{n_2}$. The functions $f_u:\mathbb{R}^{n_1+n_2}\to\mathbb{R}$, $f_l:\mathbb{R}^{n_1+n_2}\to\mathbb{R}$, $c_u:\mathbb{R}^{n_1+n_2}\to\mathbb{R}^{m_1}$, and $c_l:\mathbb{R}^{n_1+n_2}\to\mathbb{R}^{m_2}$ are assumed to be at least twice continuously differentiable in our method.

    The NBLP problem (2.1) is reduced to the following one-objective optimization problem by replacing the lower-level problem with its Karush-Kuhn-Tucker optimality conditions:

    $$\begin{aligned}
    \min_{y,z}\ & f_u(y,z)\\
    \text{s.t.}\ & c_u(y,z) \le 0,\\
    & \nabla_z f_l(y,z) + \nabla_z c_l(y,z)\lambda = 0,\\
    & c_l(y,z) \le 0,\\
    & \lambda_j c_{l_j}(y,z) = 0,\quad j=1,\dots,m_2,\\
    & \lambda_j \ge 0,\quad j=1,\dots,m_2,\\
    & a_u \le y \le b_u,\quad a_l \le z \le b_l,
    \end{aligned} \tag{2.2}$$

    where $\lambda \in \mathbb{R}^{m_2}$ is a multiplier vector associated with the inequality constraint $c_l(y,z)$. Problem (2.2) with the nonlinear complementarity condition is non-convex and non-differentiable; moreover, the regularity assumptions required to handle smooth optimization problems are never satisfied, so our approach cannot be applied to problem (2.2) directly. Due to this, we use the CHKS smoothing function to overcome this problem; see [5,26,35].

    Definition 2.1. The Chen-Mangasarian smoothing function is the map $\hat\phi(g,h):\mathbb{R}^2\to\mathbb{R}$ defined by $\hat\phi(g,h) = g + h - \sqrt{(g-h)^2}$. By introducing a smoothing parameter $\tilde\epsilon \in \mathbb{R}$ into the smoothing function $\hat\phi(g,h)$, we obtain the CHKS smoothing function

    $$\hat\phi(g,h,\tilde\epsilon) = g + h - \sqrt{(g-h)^2 + 4\tilde\epsilon^2}. \tag{2.3}$$

    The Chen-Mangasarian smoothing function has the property that $\hat\phi(g,h)=0$ if and only if $g \ge 0$, $h \ge 0$, $gh = 0$, but it is non-differentiable at $g = h = 0$. In contrast, the CHKS smoothing function has the property that $\hat\phi(g,h,\tilde\epsilon)=0$ if and only if $g \ge 0$, $h \ge 0$, and $gh = \tilde\epsilon^2$ for $\tilde\epsilon \ge 0$, and the function is smooth in $g$ and $h$ for $\tilde\epsilon \ne 0$.
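    For concreteness, the following minimal Python sketch (our illustration, not from the paper) evaluates the CHKS function (2.3) and checks the stated property numerically:

```python
import numpy as np

def chks(g, h, eps):
    """CHKS smoothing function (2.3): g + h - sqrt((g - h)^2 + 4*eps^2)."""
    return g + h - np.sqrt((g - h) ** 2 + 4.0 * eps ** 2)

# With eps = 0, exact complementarity is recovered: phi = 0 iff g, h >= 0 and g*h = 0.
print(chks(0.0, 2.0, 0.0))   # 0.0: the pair (0, 2) is complementary
print(chks(1.0, 1.0, 0.0))   # 2.0: g*h = 1 != 0, so phi != 0

# With eps != 0, phi vanishes on the smooth curve g*h = eps^2, e.g. g = h = eps.
eps = 0.1
print(chks(eps, eps, eps))   # ~0.0 up to rounding
```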

    Consequently, by using the CHKS smoothing function (2.3), the problem (2.2) can be approximated as follows:

    $$\begin{aligned}
    \min_{y,z}\ & f_u(y,z)\\
    \text{s.t.}\ & c_u(y,z) \le 0,\\
    & \nabla_z f_l(y,z) + \nabla_z c_l(y,z)\lambda = 0,\\
    & \lambda_j - c_{l_j}(y,z) - \sqrt{(\lambda_j + c_{l_j}(y,z))^2 + 4\tilde\epsilon^2} = 0,\quad j=1,\dots,m_2,\\
    & a_u \le y \le b_u,\quad a_l \le z \le b_l.
    \end{aligned} \tag{2.4}$$

    The above smoothed nonlinear programming problem can be summarized as follows

    $$\begin{aligned}
    \min_x\ & f_u(x)\\
    \text{s.t.}\ & c_u(x) \le 0,\\
    & c_e(x) = 0,\\
    & \beta_a \le x \le \beta_b,
    \end{aligned} \tag{2.5}$$

    where $x = (y,z,\lambda)^T \in \mathbb{R}^n$ with $n = n_1+n_2+m_2$ and $c_e(x) = \big[\nabla_z f_l(y,z) + \nabla_z c_l(y,z)\lambda;\ \lambda_j - c_{l_j}(y,z) - \sqrt{(\lambda_j + c_{l_j}(y,z))^2 + 4\tilde\epsilon^2}\big]$, $j=1,\dots,m_2$. The functions $f_u(x):\mathbb{R}^n\to\mathbb{R}$, $c_u(x):\mathbb{R}^n\to\mathbb{R}^{m_1}$, and $c_e(x):\mathbb{R}^n\to\mathbb{R}^{n_2+m_2}$ are twice continuously differentiable and $m_1 < n$. We denote the feasible set $E = \{x : \beta_a \le x \le \beta_b\}$ and the strict interior $int(E) = \{x : \beta_a < x < \beta_b\}$, where $\beta_a \in \{\mathbb{R}\cup\{-\infty\}\}^n$, $\beta_b \in \{\mathbb{R}\cup\{+\infty\}\}^n$, and $\beta_a < \beta_b$.

    Several methods have been suggested to solve the smoothed nonlinear programming problem (2.5); see [11,12,16,17,19]. The nonmonotone active-set interior-point trust-region algorithm is proposed in this paper to solve problem (2.5), and a detailed description of this algorithm is given in the following section.

    In this section, we first offer a detailed description of the active-set strategy with the penalty method used to convert problem (2.5) into an equality-constrained optimization problem with bounded variables. Second, the basic steps for using Newton's interior-point method to solve the equality-constrained optimization problem are presented. Third, the main steps of the nonmonotone trust-region algorithm are presented. Finally, the main steps for solving problem (2.1) are introduced.

    Motivated by the active-set strategy proposed in [8] and used in [10,11,12,13,14], we define a 0-1 diagonal matrix $D(x) \in \mathbb{R}^{m_1\times m_1}$ whose diagonal entries are defined as follows:

    $$d_i(x) = \begin{cases} 1 & \text{if } c_{u_i}(x) \ge 0,\\ 0 & \text{if } c_{u_i}(x) < 0. \end{cases} \tag{3.1}$$

    Problem (2.5) is transformed into the following equality-constrained optimization problem using the previous matrix.

    $$\begin{aligned}
    \min_x\ & f_u(x)\\
    \text{s.t.}\ & c_e(x) = 0,\\
    & c_u(x)^T D(x) c_u(x) = 0,\\
    & \beta_a \le x \le \beta_b.
    \end{aligned}$$

    The previous problem is transformed into the following problem by using a penalty method

    $$\begin{aligned}
    \min_x\ & f_u(x) + \frac{\sigma_u}{2}\|D(x)c_u(x)\|^2\\
    \text{s.t.}\ & c_e(x) = 0,\quad \beta_a \le x \le \beta_b,
    \end{aligned} \tag{3.2}$$

    where σu is a positive parameter.
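    To make the active-set and penalty transformation concrete, here is a small Python sketch (our own illustration; `fu_val` and `cu_val` stand for pre-evaluated $f_u(x)$ and $c_u(x)$) of the matrix (3.1) and the penalized objective of (3.2):

```python
import numpy as np

def active_set_matrix(cu_val):
    """0-1 diagonal matrix D(x) of (3.1): d_i = 1 iff c_{u_i}(x) >= 0."""
    return np.diag((cu_val >= 0.0).astype(float))

def penalized_objective(fu_val, cu_val, sigma_u):
    """f_u(x) + (sigma_u/2) * ||D(x) c_u(x)||^2, the objective of (3.2)."""
    r = active_set_matrix(cu_val) @ cu_val   # only active/violated inequalities contribute
    return fu_val + 0.5 * sigma_u * float(r @ r)

# Example: the second inequality is violated (c >= 0), the first is inactive.
print(penalized_objective(1.0, np.array([-0.5, 0.2]), sigma_u=10.0))   # 1.2
```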

    Motivated by the interior-point method in [6], we let $L(x,\mu_e,\lambda_a,\lambda_b)$ be the Lagrangian function associated with problem (3.2), defined as follows

    $$L(x,\mu_e,\lambda_a,\lambda_b) = \ell(x,\mu_e;\sigma_u) - \lambda_a^T(x-\beta_a) - \lambda_b^T(\beta_b-x), \tag{3.3}$$

    where

    $$\ell(x,\mu_e) = f_u(x) + \mu_e^T c_e(x), \tag{3.4}$$

    and

    $$\ell(x,\mu_e;\sigma_u) = \ell(x,\mu_e) + \frac{\sigma_u}{2}\|D(x)c_u(x)\|^2, \tag{3.5}$$

    such that $\mu_e$, $\lambda_a$, and $\lambda_b$ represent the Lagrange multiplier vectors associated with the equality constraint $c_e(x)$ and the inequality constraints $(x-\beta_a)$ and $(\beta_b-x)$, respectively. A point $x \in E$ will be a local minimizer of problem (3.2) if there exist multiplier vectors $\mu_e \in \mathbb{R}^{n_2+m_2}$, $\lambda_a \in \mathbb{R}^n_+$, and $\lambda_b \in \mathbb{R}^n_+$ such that $(x,\mu_e,\lambda_a,\lambda_b)$ satisfies the following Karush-Kuhn-Tucker conditions,

    $$\nabla_x \ell(x,\mu_e;\sigma_u) - \lambda_a + \lambda_b = 0, \tag{3.6}$$
    $$c_e(x) = 0, \tag{3.7}$$
    $$\lambda_a^{(j)}(x^{(j)} - \beta_a^{(j)}) = 0, \tag{3.8}$$
    $$\lambda_b^{(j)}(\beta_b^{(j)} - x^{(j)}) = 0, \tag{3.9}$$

    where

    $$\nabla_x \ell(x,\mu_e;\sigma_u) = \nabla_x \ell(x,\mu_e) + \sigma_u \nabla c_u(x) D(x) c_u(x), \tag{3.10}$$

    and

    $$\nabla_x \ell(x,\mu_e) = \nabla f_u(x) + \nabla c_e(x)\mu_e. \tag{3.11}$$

    Let V(x) be a diagonal matrix whose diagonal elements are as follows:

    $$v^{(j)}(x) = \begin{cases} x^{(j)} - \beta_a^{(j)}, & \text{if } (\nabla_x \ell(x,\mu_e;\sigma_u))^{(j)} \ge 0 \text{ and } \beta_a^{(j)} > -\infty,\\ \beta_b^{(j)} - x^{(j)}, & \text{if } (\nabla_x \ell(x,\mu_e;\sigma_u))^{(j)} < 0 \text{ and } \beta_b^{(j)} < +\infty,\\ 1, & \text{otherwise.} \end{cases} \tag{3.12}$$

    For more details see [11,12,20]. Using the scaling matrix V(x), conditions (3.6)–(3.9) are equivalent to the following nonlinear system,

    $$V^2(x)\nabla_x \ell(x,\mu_e;\sigma_u) = 0, \tag{3.13}$$
    $$c_e(x) = 0. \tag{3.14}$$

    The nonlinear system (3.13)-(3.14) is continuous but not everywhere differentiable, for the following reasons.

    ● It may be non-differentiable when $v^{(j)} = 0$. To overcome this problem, we restrict $x \in int(E)$.

    ● It may be non-differentiable when $v^{(j)}$ has an infinite upper bound and a finite lower bound and $(\nabla_x \ell(x,\mu_e;\sigma_u))^{(j)} = 0$. To overcome this problem, we define a vector $\theta(x)$ whose components $\theta^{(j)}(x) = \partial v^{(j)}(x)/\partial x^{(j)}$, $j=1,\dots,n$, are defined as follows

    $$\theta^{(j)}(x) = \begin{cases} 1, & \text{if } (\nabla_x \ell(x,\mu_e;\sigma_u))^{(j)} \ge 0 \text{ and } \beta_a^{(j)} > -\infty,\\ -1, & \text{if } (\nabla_x \ell(x,\mu_e;\sigma_u))^{(j)} < 0 \text{ and } \beta_b^{(j)} < +\infty,\\ 0, & \text{otherwise.} \end{cases} \tag{3.15}$$
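    The two definitions are easy to implement jointly; a minimal sketch (ours, with infinite bounds encoded as ±np.inf) follows:

```python
import numpy as np

def scaling_and_theta(x, grad_l, beta_a, beta_b):
    """Diagonal entries v(x) of (3.12) and theta(x) of (3.15).

    grad_l is the gradient of the penalized Lagrangian (3.10) at x."""
    v = np.ones_like(x)
    theta = np.zeros_like(x)
    low = (grad_l >= 0.0) & np.isfinite(beta_a)    # distance to the relevant lower bound
    up = (grad_l < 0.0) & np.isfinite(beta_b)      # distance to the relevant upper bound
    v[low], theta[low] = x[low] - beta_a[low], 1.0
    v[up], theta[up] = beta_b[up] - x[up], -1.0
    return v, theta
```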

    If we use Newton's method to solve the nonlinear system (3.13)-(3.14), we get

    $$\big[V^2(x)\nabla_x^2 \ell(x,\mu_e;\sigma_u) + \mathrm{diag}(\nabla_x \ell(x,\mu_e;\sigma_u))\,\mathrm{diag}(\theta(x))\big]\Delta x + V^2(x)\nabla c_e(x)\Delta\mu_e = -V^2(x)\nabla_x \ell(x,\mu_e;\sigma_u), \tag{3.16}$$
    $$\nabla c_e(x)^T \Delta x = -c_e(x), \tag{3.17}$$

    where

    $$\nabla_x^2 \ell(x,\mu_e;\sigma_u) = H + \sigma_u \nabla c_u(x) D(x) \nabla c_u(x)^T, \tag{3.18}$$

    and H is the Hessian of the Lagrangian function (3.4) or an approximation to it.

    Restricting $x \in int(E)$ is necessary to ensure that the matrix $V(x)$ is nonsingular. Therefore, putting $\Delta x = V(x)s$ in both Eqs (3.16) and (3.17) and multiplying both sides of Eq (3.16) by $V^{-1}(x)$, we get

    $$\big[V(x)\nabla_x^2 \ell(x,\mu_e;\sigma_u)V(x) + \mathrm{diag}(\nabla_x \ell(x,\mu_e;\sigma_u))\,\mathrm{diag}(\theta(x))\big]s + V(x)\nabla c_e(x)\Delta\mu_e = -V(x)\nabla_x \ell(x,\mu_e;\sigma_u), \tag{3.19}$$
    $$(V(x)\nabla c_e(x))^T s = -c_e(x). \tag{3.20}$$

    It should be noted that the step $s_k$ produced by the system (3.19)-(3.20) is equivalent to the step produced by solving the following quadratic programming subproblem,

    $$\begin{aligned}
    \min_s\ & \ell(x,\mu_e;\sigma_u) + (V(x)\nabla_x \ell(x,\mu_e;\sigma_u))^T s + \frac12 s^T B s\\
    \text{s.t.}\ & c_e(x) + (V(x)\nabla c_e(x))^T s = 0,
    \end{aligned} \tag{3.21}$$

    where

    $$B = G(x) + \sigma_u V(x)\nabla c_u(x) D(x) \nabla c_u(x)^T V(x), \tag{3.22}$$

    and

    $$G(x) = V(x) H(x) V(x) + \mathrm{diag}(\nabla_x \ell(x,\mu_e;\sigma_u))\,\mathrm{diag}(\theta(x)). \tag{3.23}$$

    While Newton's approach has the advantage of being quadratically convergent under reasonable assumptions, it also has the disadvantage of requiring that the starting point be close to the solution. The nonmonotone trust-region globalization approach is used to ensure convergence from any starting point. The nonmonotone trust-region globalization strategy is a crucial method for solving smooth nonlinear unconstrained or constrained optimization problems that can produce substantial global convergence.

    The main steps of applying the nonmonotone trust-region algorithm to solve the quadratic subproblem (3.21) are described in what follows.

    The trust-region subproblem associated with the problem (3.21) is given by

    $$\begin{aligned}
    \min_s\ & \ell(x,\mu_e;\sigma_u) + (V(x)\nabla_x \ell(x,\mu_e;\sigma_u))^T s + \frac12 s^T B s\\
    \text{s.t.}\ & c_e(x) + (V(x)\nabla c_e(x))^T s = 0,\\
    & \|s\| \le \delta_k,
    \end{aligned} \tag{3.24}$$

    where $\delta_k$ is the radius of the trust region.

    Because the hyperplane of the linearized constraints $c_e(x) + (V(x)\nabla c_e(x))^T s = 0$ and the constraint $\|s\| \le \delta_k$ may have no intersecting points, subproblem (3.24) may be infeasible. Even if they do intersect, there is no guarantee that this remains true as $\delta_k$ is reduced; see [7]. To solve this problem, we use a reduced Hessian strategy. This strategy was suggested in [2,32] and subsequently implemented in [12,13,16,17,18,20]. It decomposes the step $s_k$ into two orthogonal components: the normal component $s^n_k$ for improving feasibility and the tangential component $s^t_k$ for improving optimality. That is, $s_k = s^n_k + s^t_k$ and $s^t_k = \tilde Z_k \bar s^t_k$, where $\tilde Z_k$ is a matrix whose columns form a basis for the null space of $(V_k \nabla c_{e_k})^T$.

    To compute $s^n_k$

    The normal component $s^n_k$ is obtained by solving the following trust-region subproblem

    $$\begin{aligned}
    \min_{s^n}\ & \frac12\|c_{e_k} + (V_k \nabla c_{e_k})^T s^n\|^2\\
    \text{s.t.}\ & \|s^n\| \le \zeta\delta_k,
    \end{aligned} \tag{3.25}$$

    for some $\zeta \in (0,1)$.

    A conjugate gradient method [34], which is very cheap if the problem is large-scale and the Hessian is indefinite, is used to compute the normal component $s^n_k$. The main steps involved in applying the conjugate gradient method to solve subproblem (3.25) are presented in the following algorithm.

    Algorithm 3.1. (A conjugate gradient method to compute $s^n_k$)

    Step 1. Set $s^n_0 = 0$, $r^n_0 = -V_k \nabla c_{e_k} c_{e_k}$, and $p^n_0 = r^n_0$; pick $\epsilon > 0$.

    Step 2. For $i = 0, 1, \dots$ do

    Compute $\gamma^n_i = \dfrac{{r^n_i}^T r^n_i}{{p^n_i}^T V_k \nabla c_{e_k} \nabla c_{e_k}^T V_k p^n_i}$.

    Compute $\tau^n_i$ such that $\|s^n_i + \tau^n_i p^n_i\| = \zeta\delta_k$.

    If $\gamma^n_i \le 0$, or if $\gamma^n_i > \tau^n_i$, then set $s^n_k = s^n_i + \tau^n_i p^n_i$ and stop.

    Otherwise, set $s^n_{i+1} = s^n_i + \gamma^n_i p^n_i$ and

    $r^n_{i+1} = r^n_i - \gamma^n_i V_k \nabla c_{e_k} \nabla c_{e_k}^T V_k p^n_i$.

    If $\|r^n_{i+1}\|/\|r^n_0\| \le \epsilon$,

    set $s^n_k = s^n_{i+1}$ and stop.

    Compute $\hat\beta_i = \dfrac{{r^n_{i+1}}^T r^n_{i+1}}{{r^n_i}^T r^n_i}$ and the new direction $p^n_{i+1} = r^n_{i+1} + \hat\beta_i p^n_i$.
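    A compact Python sketch of this truncated conjugate gradient loop (our illustration; `A` stands for $V_k \nabla c_{e_k}$, `c` for $c_{e_k}$, and the trust-region boundary, here $\zeta\delta_k$, is passed in as `radius`):

```python
import numpy as np

def to_boundary(s, p, radius):
    """Positive tau with ||s + tau*p|| = radius (s strictly inside the ball)."""
    a, b, c = p @ p, 2.0 * (s @ p), s @ s - radius ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def cg_normal_step(A, c, radius, eps=1e-10, max_iter=200):
    """Algorithm 3.1 style CG for min 0.5*||c + A.T @ s||^2 s.t. ||s|| <= radius."""
    s = np.zeros(A.shape[0])
    r = -A @ c                        # negative gradient of the model at s = 0
    p = r.copy()
    r0 = np.linalg.norm(r)
    for _ in range(max_iter):
        Hp = A @ (A.T @ p)            # Gauss-Newton Hessian (A A^T) applied to p
        pHp = p @ Hp
        tau = to_boundary(s, p, radius)
        gamma = (r @ r) / pHp if pHp > 0 else -1.0
        if gamma <= 0 or gamma > tau: # nonpositive curvature or step leaves the region
            return s + tau * p
        s = s + gamma * p
        r_new = r - gamma * Hp
        if np.linalg.norm(r_new) <= eps * r0:
            return s
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return s
```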

    Given $s^n_k$, let $q(V_k s_k)$ be the quadratic model of the function (3.5), defined as follows

    $$q(V_k s_k) = \ell(x_k,\mu_{e_k};\sigma_{u_k}) + (V_k \nabla_x \ell(x_k,\mu_{e_k};\sigma_{u_k}))^T s_k + \frac12 s_k^T B_k s_k. \tag{3.26}$$

    To compute $s^t_k$

    To compute the tangential component $s^t_k = \tilde Z_k \bar s^t_k$, solve the following trust-region subproblem

    $$\begin{aligned}
    \min_{\bar s^t}\ & (\tilde Z_k^T \nabla q_k(V_k s^n_k))^T \bar s^t + \frac12 {\bar s^t}^T \tilde Z_k^T B_k \tilde Z_k \bar s^t\\
    \text{s.t.}\ & \|\tilde Z_k \bar s^t\| \le \Delta_k,
    \end{aligned} \tag{3.27}$$

    where $\nabla q_k(V_k s^n_k) = V_k \nabla_x \ell(x_k,\mu_{e_k};\sigma_{u_k}) + B_k s^n_k$ and $\Delta_k = \sqrt{\delta_k^2 - \|s^n_k\|^2}$. The main steps involved in applying the conjugate gradient method to solve subproblem (3.27) are presented in the following algorithm.

    Algorithm 3.2. (A conjugate gradient method to compute $s^t_k$)

    Step 1. Set $\bar s^t_0 = 0$, $r^t_0 = -\tilde Z_k^T \nabla q_k(V_k s^n_k)$, and $p^t_0 = r^t_0$.

    Step 2. For $i = 0, 1, \dots$ do

    Compute $\gamma^t_i = \dfrac{{r^t_i}^T r^t_i}{{p^t_i}^T \tilde Z_k^T B_k \tilde Z_k p^t_i}$.

    Compute $\tau^t_i$ such that $\|\bar s^t_i + \tau^t_i p^t_i\| = \Delta_k$.

    If $\gamma^t_i \le 0$, or if $\gamma^t_i > \tau^t_i$, then set $\bar s^t_k = \bar s^t_i + \tau^t_i p^t_i$ and stop.

    Otherwise, set $\bar s^t_{i+1} = \bar s^t_i + \gamma^t_i p^t_i$ and

    $r^t_{i+1} = r^t_i - \gamma^t_i \tilde Z_k^T B_k \tilde Z_k p^t_i$.

    If $\|r^t_{i+1}\|/\|r^t_0\| \le \epsilon$,

    set $\bar s^t_k = \bar s^t_{i+1}$ and stop.

    Compute $\bar\beta_i = \dfrac{{r^t_{i+1}}^T r^t_{i+1}}{{r^t_i}^T r^t_i}$ and $p^t_{i+1} = r^t_{i+1} + \bar\beta_i p^t_i$.
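    The same skeleton serves Algorithm 3.2: only the operator and the right-hand side change. Under the assumptions of the previous sketch, the reduced Hessian product can be supplied as a closure, e.g.:

```python
# Hypothetical reuse of the CG skeleton for the tangential phase: the product
# A @ (A.T @ p) is replaced by Zt.T @ (B @ (Zt @ p)), with Zt a null-space
# basis of (V_k grad c_e)^T and B the scaled Hessian of (3.22); the initial
# residual becomes -Zt.T @ (V @ grad_l + B @ sn), and the radius is Delta_k.
def reduced_hessian_action(Zt, B):
    return lambda p: Zt.T @ (B @ (Zt @ p))
```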

    After computing $s_k$, we set $x_{k+1} = x_k + V_k s_k$. To ensure that the matrix $V_k$ is nonsingular, we need to guarantee that $x_{k+1} \in int(E)$, so a damping parameter $\varphi_k$ is required at every iteration $k$.

    To obtain the damping parameter $\varphi_k$

    The following algorithm clarifies the fundamental steps required to obtain the damping parameter $\varphi_k$.

    Algorithm 3.3. (Damping parameter $\varphi_k$)

    Step 1. Compute the parameter $\omega_k$ as follows

    $$\omega^{(i)}_k = \begin{cases} \dfrac{\beta_a^{(i)} - x_k^{(i)}}{V_k^{(i)} s_k^{(i)}}, & \text{if } \beta_a^{(i)} > -\infty \text{ and } V_k^{(i)} s_k^{(i)} < 0,\\ 1, & \text{otherwise.} \end{cases}$$

    Step 2. Compute the parameter $\upsilon_k$ as follows

    $$\upsilon^{(i)}_k = \begin{cases} \dfrac{\beta_b^{(i)} - x_k^{(i)}}{V_k^{(i)} s_k^{(i)}}, & \text{if } \beta_b^{(i)} < +\infty \text{ and } V_k^{(i)} s_k^{(i)} > 0,\\ 1, & \text{otherwise.} \end{cases}$$

    Step 3. Compute the damping parameter $\varphi_k$ as follows

    $$\varphi_k = \min\Big\{1,\ \min_i\{\omega^{(i)}_k, \upsilon^{(i)}_k\}\Big\}. \tag{3.28}$$

    Step 4. Set $x_{k+1} = x_k + \varphi_k V_k s_k$.
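    A minimal sketch of Algorithm 3.3 (ours; `step` stands for the scaled step $V_k s_k$):

```python
import numpy as np

def damping_parameter(x, step, beta_a, beta_b):
    """phi_k of (3.28): the largest fraction (capped at 1) of the scaled step
    that keeps x + phi*step within the finite bounds."""
    omega = np.ones_like(x)
    upsil = np.ones_like(x)
    dec = (step < 0.0) & np.isfinite(beta_a)       # moving toward a lower bound
    inc = (step > 0.0) & np.isfinite(beta_b)       # moving toward an upper bound
    omega[dec] = (beta_a[dec] - x[dec]) / step[dec]
    upsil[inc] = (beta_b[inc] - x[inc]) / step[inc]
    return min(1.0, float(np.min(np.minimum(omega, upsil))))
```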

    To determine whether the scaled step $\varphi_k V_k s_k$ will be accepted or not, we need a merit function that connects the objective function and the constraints in such a way that progress in the merit function equates to progress in solving the problem. The following augmented Lagrangian function is used as a merit function,

    $$\Phi(x,\mu_e;\sigma_u;\sigma_e) = f_u(x) + \mu_e^T c_e(x) + \frac{\sigma_u}{2}\|D(x)c_u(x)\|^2 + \sigma_e\|c_e(x)\|^2, \tag{3.29}$$

    where σe is the penalty parameter.

    To test the scaled step, we use the following scheme to determine the Lagrange multiplier vector $\mu_{e_{k+1}}$,

    $$\min_{\mu_e}\ \|\nabla f_{u_{k+1}} + \nabla c_{e_{k+1}}\mu_e + \sigma_{u_k}\nabla c_{u_{k+1}} D_{k+1} c_{u_{k+1}}\|^2. \tag{3.30}$$

    The following actual reduction and predicted reduction must be defined to determine whether the point $(x_{k+1},\mu_{e_{k+1}})$ will be accepted in the next iteration or not. The actual reduction in the merit function in moving from $(x_k,\mu_{e_k})$ to $(x_k + \varphi_k V_k s_k, \mu_{e_{k+1}})$ is defined as

    $$Ared_k = \Phi(x_k,\mu_{e_k};\sigma_{u_k};\sigma_{e_k}) - \Phi(x_k + \varphi_k V_k s_k, \mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k}).$$

    The predicted reduction $Pred_k$ in the merit function (3.29) is defined as follows

    $$Pred_k = q_k(0) - q_k(\varphi_k V_k s_k) + \sigma_{e_k}\big[\|c_{e_k}\|^2 - \|c_{e_k} + \nabla c_{e_k}^T \varphi_k V_k s_k\|^2\big], \tag{3.31}$$

    where

    $$q_k(\varphi_k V_k s_k) = \ell(x_k,\mu_{e_k}) + \nabla_x \ell(x_k,\mu_{e_k})^T \varphi_k V_k s_k + \frac12 \varphi_k^2 s_k^T G_k s_k + \frac{\sigma_{u_k}}{2}\|D_k(c_{u_k} + \nabla c_{u_k}^T \varphi_k V_k s_k)\|^2. \tag{3.32}$$

    To ensure that $Pred_k \ge 0$, the penalty parameter $\sigma_{e_k}$ must be updated.

    To update the penalty parameter $\sigma_{e_k}$

    The penalty parameter $\sigma_{e_k}$ is updated by using the following algorithm.

    Algorithm 3.4. (Process to update $\sigma_{e_k}$)

    If

    $$Pred_k \le \frac{\sigma_{e_k}}{2}\big[\|c_{e_k}\|^2 - \|c_{e_k} + \nabla c_{e_k}^T \varphi_k V_k s_k\|^2\big], \tag{3.33}$$

    set

    $$\sigma_{e_k} = \frac{2\big[q_k(\varphi_k V_k s_k) - q_k(0)\big]}{\|c_{e_k}\|^2 - \|c_{e_k} + \nabla c_{e_k}^T \varphi_k V_k s_k\|^2} + \tilde b_0, \tag{3.34}$$

    where $\tilde b_0 > 0$ is a small fixed constant.

    Else, set $\sigma_{e_{k+1}} = \sigma_{e_k}$.

    End if.

    For more details, see [12].
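    A direct transcription of Algorithm 3.4 in Python (our sketch; `feas_decrease` abbreviates $\|c_{e_k}\|^2 - \|c_{e_k} + \nabla c_{e_k}^T \varphi_k V_k s_k\|^2$):

```python
def update_sigma_e(sigma_e, pred, q_step, q_zero, feas_decrease, b0=0.1):
    """Enlarge sigma_e via (3.34) whenever condition (3.33) detects that
    Pred_k does not dominate half the weighted feasibility decrease."""
    if pred <= 0.5 * sigma_e * feas_decrease:
        sigma_e = 2.0 * (q_step - q_zero) / feas_decrease + b0
    return sigma_e
```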

    To test $\varphi_k V_k s_k$ and update $\delta_k$

    Motivated by the nonmonotone trust-region strategy in [31], we define

    $$t_k = \frac{C_k - \Phi(x_k + \varphi_k V_k s_k, \mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})}{Pred_k}, \tag{3.35}$$

    where

    $$C_k = \begin{cases} \Phi(x_k,\mu_{e_k};\sigma_{u_k};\sigma_{e_k}), & \text{if } k = 0,\\[4pt] \dfrac{\xi_{k-1}Q_{k-1}C_{k-1} + \Phi(x_k + \varphi_k V_k s_k, \mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})}{Q_k}, & \text{if } k \ge 1, \end{cases} \tag{3.36}$$

    and

    $$Q_k = \begin{cases} 1, & \text{if } k = 0,\\ \xi_{k-1}Q_{k-1} + 1, & \text{if } k \ge 1, \end{cases} \tag{3.37}$$

    such that $0 \le \xi_{\min} \le \xi_{k-1} \le \xi_{\max} \le 1$ and

    $$\xi_k = \begin{cases} 0.5\,\xi_0, & \text{if } k = 1,\\ 0.5(\xi_{k-1} + \xi_{k-2}), & \text{if } k \ge 2. \end{cases} \tag{3.38}$$
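    The bookkeeping in (3.36)-(3.38) costs only a few scalars per iteration; a sketch (ours) of one update:

```python
def nonmonotone_update(phi_new, C_prev, Q_prev, xi_prev):
    """One step of (3.36)-(3.37): returns (C_k, Q_k) from the previous pair,
    where phi_new is the merit value entering the weighted average."""
    Q = xi_prev * Q_prev + 1.0
    C = (xi_prev * Q_prev * C_prev + phi_new) / Q
    return C, Q

# Initialization (k = 0): C_0 = Phi(x_0, mu_e0; sigma_u0; sigma_e0) and Q_0 = 1;
# xi_k is then averaged as in (3.38). Note that xi = 0 recovers the monotone
# ratio, since C_k then collapses to the latest merit value.
```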

    The procedure below tests the trial step $\varphi_k V_k s_k$ and updates the radius $\delta_k$.

    Algorithm 3.5. (Test $\varphi_k V_k s_k$ and update $\delta_k$)

    Step 0. Choose $0 < \alpha_1 < \alpha_2 \le 1$, $\delta_{\max} > \delta_{\min}$, and $0 < \zeta_1 < 1 < \zeta_2$.

    Step 1. (Compute $t_k$)

    Compute $\xi_k$ by using (3.38).

    Compute $Q_k$ as defined in (3.37).

    Compute $C_k$ as defined in (3.36).

    Compute $t_k = \dfrac{C_k - \Phi(x_k + \varphi_k V_k s_k, \mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})}{Pred_k}$.

    Step 2. (Update the trial step $s_k$)

    While $t_k < \alpha_1$ or $Pred_k \le 0$,

    set $\delta_k = \zeta_1\|s_k\|$.

    To evaluate a new step $s_k$, go to Algorithms 3.1 and 3.2.

    Step 3. (Update $\delta$)

    If $\alpha_1 \le t_k < \alpha_2$, then

    set $x_{k+1} = x_k + \varphi_k V_k s_k$,

    $\delta_{k+1} = \max(\delta_{\min}, \delta_k)$.

    End if.

    If $t_k \ge \alpha_2$, then

    set $x_{k+1} = x_k + \varphi_k V_k s_k$,

    $\delta_{k+1} = \min\{\max\{\delta_{\min}, \zeta_2\delta_k\}, \delta_{\max}\}$.

    End if.

    To update the parameter $\sigma_{u_k}$

    To update $\sigma_{u_k}$, we use a scheme proposed in [39]. Let the tangential predicted decrease $Tpred$, obtained from the tangential component $s^t_k$, be given by

    $$Tpred_k(\bar s^t_k) = q_k(V_k s^n_k) - q_k(V_k(s^n_k + \tilde Z_k \bar s^t_k)); \tag{3.39}$$

    the main steps used to update the parameter $\sigma_{u_k}$ are clarified by the following algorithm.

    Algorithm 3.6. (Updating $\sigma_{u_k}$)

    Step 1. Set $\sigma_{u_0} = 1$.

    Step 2. If

    $$Tpred_k \ge \|V_k \nabla c_{u_k} D_k c_{u_k}\|\min\{\|V_k \nabla c_{u_k} D_k c_{u_k}\|, \delta_k\}, \tag{3.40}$$

    then $\sigma_{u_{k+1}} = \sigma_{u_k}$;

    else, $\sigma_{u_{k+1}} = 2\sigma_{u_k}$.

    End if.
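    A one-line transcription of this test (our sketch; `viol` abbreviates $\|V_k \nabla c_{u_k} D_k c_{u_k}\|$):

```python
def update_sigma_u(sigma_u, tpred, viol, delta):
    """Algorithm 3.6: double sigma_u unless condition (3.40) holds."""
    return sigma_u if tpred >= viol * min(viol, delta) else 2.0 * sigma_u
```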

    Finally, the nonmonotone trust-region algorithm is terminated when either

    $$\|\tilde Z_k^T V_k \nabla_x \ell(x_k,\mu_{e_k})\| + \|V_k \nabla c_{u_k} D_k c_{u_k}\| + \|c_{e_k}\| \le \varepsilon_1$$

    or $\|s_k\| \le \varepsilon_2$, for some $\varepsilon_1, \varepsilon_2 > 0$.

    Nonmonotone trust-region algorithm

    In the algorithm that follows, we will outline the main steps of the nonmonotone trust-region algorithm in order to solve subproblem (3.21).

    Algorithm 3.7. (The nonmonotone trust-region algorithm)

    Step 0. Start with $x_0 \in int(E)$. Compute $D_0$, $V_0$, $\mu_{e_0}$, and $\theta_0$. Set $\sigma_{u_0} = 1$, $\sigma_{e_0} = 1$, and $\tilde b_0 = 0.1$.

    Choose $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$. Choose $\delta_{\min}$, $\delta_{\max}$, and $\delta_0$ such that $\delta_{\min} \le \delta_0 \le \delta_{\max}$.

    Choose $\alpha_1$, $\alpha_2$, $\zeta_1$, and $\zeta_2$ such that $0 < \zeta_1 < 1 < \zeta_2$ and $0 < \alpha_1 < \alpha_2 < 1$. Set $k = 0$.

    Step 1. (Termination)

    If $\|\tilde Z_k^T V_k \nabla_x \ell(x_k,\mu_{e_k})\| + \|V_k \nabla c_{u_k} D_k c_{u_k}\| + \|c_{e_k}\| \le \varepsilon_1$, then stop the algorithm.

    Step 2. (Computing the step $\varphi_k V_k s_k$)

    Evaluate the normal component $s^n_k$ by using Algorithm 3.1.

    Evaluate the tangential step $\bar s^t_k$ by using Algorithm 3.2.

    Set $s_k = s^n_k + \tilde Z_k \bar s^t_k$.

    If $\|s_k\| \le \varepsilon_2$, then stop the algorithm.

    Else, compute the parameter $\varphi_k$ by using (3.28).

    Set $x_{k+1} = x_k + \varphi_k V_k s_k$.

    End if.

    Step 3. Evaluate $D_{k+1}$, which is defined by (3.1).

    Step 4. (Updating the multiplier vector $\mu_{e_{k+1}}$)

    Compute the Lagrange multiplier vector $\mu_{e_{k+1}}$ by using (3.30).

    Step 5. (Updating the penalty parameter $\sigma_e$)

    Use Algorithm 3.4 to update the penalty parameter $\sigma_{e_k}$.

    Step 6. Compute $t_k$ as defined in (3.35) by using both $Q_k$ as defined in (3.37) and $C_k$ as defined in (3.36).

    Use Algorithm 3.5 to test the scaled step $\varphi_k V_k s_k$ and update the radius of the trust region.

    Step 7. Update the parameter $\sigma_{u_k}$ using Algorithm 3.6.

    Step 8. Utilize (3.12) to evaluate the matrix $V_{k+1}$.

    Step 9. Set $k = k + 1$ and go to Step 1.

    The global convergence theory for Algorithm 3.7 is similar to the global convergence theory of the algorithm presented in [12] when used to solve a general nonlinear programming problem.

    We will outline the main steps of the nonmonotone active-set interior-point trust-region algorithm based on the CHKS smoothing function to solve problem (2.1) in the following algorithm.

    Algorithm 3.8. (CHKS nonmonotone active-set interior-point trust-region algorithm)

    Step 1. Reduce the nonlinear bilevel programming problem (2.1) to the one-objective optimization problem (2.2) by using the Karush-Kuhn-Tucker optimality conditions for the lower-level problem.

    Step 2. Use the CHKS smoothing function to convert the non-differentiable problem (2.2) to the smoothed problem (2.4).

    Step 3. Utilize an active-set strategy to convert the nonlinearly constrained optimization problem (2.4) to the equality-constrained optimization problem with bounded variables (3.2).

    Step 4. Utilize an interior-point method with the diagonal scaling matrix $V(x)$ given in (3.12) to obtain the nonlinear system (3.13)-(3.14).

    Step 5. Utilize Newton's method with the diagonal matrix $\theta$ as defined in (3.15) to solve the nonlinear system (3.13)-(3.14) and obtain the equivalent subproblem (3.21).

    Step 6. Solve subproblem (3.21) by using the nonmonotone trust-region method given by Algorithm 3.7.

    In this section, we introduce an extensive variety of numerical NBLP problems to illustrate the validity of the proposed Algorithm 3.8 for solving problem (2.1).

    We present the MATLAB numerical results for Algorithm 3.8 with a starting point $x_0 \in int(E)$. The following parameter settings were used: $\alpha_1 = 10^{-2}$, $\alpha_2 = 0.75$, $\zeta_1 = 0.5$, $\zeta_2 = 2$, $\delta_{\min} = 10^{-3}$, $\delta_0 = \max(s_{cp_0}, \delta_{\min})$, $\delta_{\max} = 10^{3}\delta_0$, $\varepsilon_1 = 10^{-10}$, and $\varepsilon_2 = 10^{-12}$.

    To demonstrate the efficacy of the suggested Algorithm 3.8 in obtaining solutions to the NBLP problem (2.1), fifteen benchmark instances from [4,22,27,30,33] were used to test it.

    To check the consistency of the results, 10 independent runs using different initial starting points were carried out for each test example. Table 1 summarizes the statistical data for all examples and demonstrates that the results of the proposed Algorithm 3.8 are approximate or equal to those of the compared method in [18] and those reported in the literature.

    Table 1. Comparison of the results of Algorithm 3.8 with those of the method in [18] and with those of various existing methods (Ref.). For each problem, each column gives the computed point $(y,z)$ and the objective values $f_u$; $f_l$.

    Problem | Algorithm 3.8 | Method [18] | Ref.
    TP1 | (0.8438, 0.7657, 0); -2.0769; -0.5863 | (0.8465, 0.7695, 0); -2.0772; -0.5919 | (0.8438, 0.7657, 0); -2.0769; -0.5863
    TP2 | (0.6111, 0.3890, 0, 0, 1.8339); 0.64013; 1.6816 | (0.6111, 0.3890, 0, 0, 1.8339); 0.64013; 1.6816 | (0.609, 0.391, 0, 0, 1.828); 0.6426; 1.6708
    TP3 | (0.97, 3.14, 2.6, 1.8); -8.92; -6.05 | (0.97, 3.14, 2.6, 1.8); -8.92; -6.05 | (0.97, 3.14, 2.6, 1.8); -8.92; -6.05
    TP4 | (0.5, 0.5, 0.5, 0.5); -1; 0 | (0.5, 0.5, 0.5, 0.5); -1; 0 | (0.5, 0.5, 0.5, 0.5); -1; 0
    TP5 | (9.9998, 9.9998); 99.9996; 3.7160e-07 | (9.9953, 9.9955); 99.907; 1.8628e-04 | (10.03, 9.969); 100.58; 0.001
    TP6 | (1.8884, 0.89041, 0); -1.2067; 7.6062 | (1.8889, 0.88889, 6.8157e-06); -1.4074; 7.6172 | NA; 3.57; 2.4
    TP7 | (1, 0); 17; 1 | (1, 0); 17; 1 | (1, 0); 17; 1
    TP8 | (0.75, 0.75, 0.75, 0.75); -2.25; 0 | (0.7513, 0.7513, 0.752, 0.752); -2.2480; 0 | (3/2, 3/2, 3/2, 3/2); -2.1962; 0
    TP9 | (11.25, 5); 2250; 197.753 | (11.25, 5); 2250; 197.753 | (11.25, 5); 2250; 197.753
    TP10 | (1, 0, 0); 0; 0 | (1, 0, 1); 1; -1 | (1, 0, 1); 1; -1
    TP11 | (25, 30, 4.9996, 10); 5.0024; 1.5000e-06 | (25, 30, 5, 10); 5; 0 | (25, 30, 5, 10); 5; 0
    TP12 | (3, 5); 9; 0 | (3, 5); 9; 0 | (3, 5); 9; 0
    TP13 | (0, 2, 1.875, 0.90625); -18.679; -1.0156 | (0, 2, 1.875, 0.9063); -12.68; -1.016 | (0, 2, 1.875, 0.9063); -12.68; -1.016
    TP14 | (10.016, 0.81967); 81.328; -0.33593 | (10, 0.011); 8.1978e+01; 0 | (10.04, 0.1429); 82.44; 0.271
    TP15 | (0, 0.9, 0, 0.6, 0.4); -29.2; 3.2 | (0, 0.9, 0, 0.6, 0.4); -29.2; 3.2 | (0, 0.9, 0, 0.6, 0.4); -29.2; 3.2

    For purposes of comparison, Table 2 provides the corresponding average number of iterations (iter), average number of function evaluations (nfunc), and average CPU time in seconds (CPU(s)) obtained via Algorithm 3.8 and via the method in [18]. The results in Table 2 demonstrate that the proposed Algorithm 3.8 is comparable to or better than the compared algorithms from the literature. The results show that our proposed method is capable of handling the NBLP problem (2.1) whether the upper or lower levels are convex or not, and the computed results converge to optimal solutions that are approximate or equal to the optimal solutions stated in the literature. Finally, it is evident from a comparison of the solutions obtained via the proposed Algorithm 3.8 and those found in the literature that the proposed Algorithm 3.8 is capable of finding the best solution to some problems with less time, fewer function evaluations, and fewer iterations.

    Table 2. Comparison of the results of Algorithm 3.8 with those of the method in [18], with respect to the number of iterations, the number of function evaluations, and time.

    Problem | Algorithm 3.8: iter; nfunc; CPU(s) | Method [18]: iter; nfunc; CPU(s)
    TP1 | 10; 11; 1.54 | 10; 13; 1.62
    TP2 | 8; 9; 1.78 | 9; 12; 1.87
    TP3 | 6; 8; 2.1 | 7; 8; 2.52
    TP4 | 10; 12; 1.76 | 12; 13; 1.92
    TP5 | 6; 8; 1.65 | 6; 7; 1.523
    TP6 | 7; 9; 3.5 | 8; 10; 3.95
    TP7 | 11; 13; 1.8 | 11; 12; 1.652
    TP8 | 4; 5; 1.569 | 11; 12; 0.953
    TP9 | 9; 11; 2.23 | 8; 10; 1.87
    TP10 | 4; 7; 2.887 | 5; 6; 3.31
    TP11 | 9; 11; 3.542 | 10; 13; 3.632
    TP12 | 4; 5; 1.12 | 7; 9; 1.33
    TP13 | 5; 7; 2.1 | 5; 8; 1.998
    TP14 | 5; 6; 1.87 | 5; 6; 1.97
    TP15 | 5; 6; 20.212 | 6; 7; 20.125

    Test Problem 1 [28]:

    $$\begin{aligned}
    \min_{y_1}\ & f_u = z_1^2 + z_2^2 + y_1^2 - 4y_1\\
    \text{s.t.}\ & 0 \le y_1 \le 2,\\
    & \min_{z_1,z_2}\ f_l = z_1^2 + 0.5z_2^2 + z_1 z_2 + (1-3y_1)z_1 + (1+y_1)z_2,\\
    & \quad\text{s.t.}\ 2z_1 + z_2 - 2y_1 \le 1,\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 1 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,z_1,z_2,\lambda}\ & f_u = z_1^2 + z_2^2 + y_1^2 - 4y_1\\
    \text{s.t.}\ & 2z_1 + z_2 + (1-3y_1) + 2\lambda = 0,\\
    & z_1 + z_2 + (1+y_1) + \lambda = 0,\\
    & \lambda - (2z_1 + z_2 - 2y_1 - 1) - \sqrt{(\lambda + 2z_1 + z_2 - 2y_1 - 1)^2 + 4\tilde\epsilon^2} = 0,\\
    & 0 \le y_1 \le 2,\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,z_1,z_2) = (0.8438, 0.7657, 0)$, $f_u = -2.0769$, and $f_l = -0.5863$.
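    For concreteness, the smoothed equality constraints of this SNLP are easy to assemble programmatically; the following Python sketch (ours, reusing the `chks` helper from Section 2) returns their residuals at a given point:

```python
import numpy as np

def chks(g, h, eps):
    return g + h - np.sqrt((g - h) ** 2 + 4.0 * eps ** 2)

def tp1_equality_residuals(v, eps=1e-3):
    """Equality constraints of the smoothed Test Problem 1; v = (y1, z1, z2, lam)."""
    y1, z1, z2, lam = v
    c = 2 * z1 + z2 - 2 * y1 - 1               # lower-level inequality, c <= 0
    return np.array([
        2 * z1 + z2 + (1 - 3 * y1) + 2 * lam,  # stationarity w.r.t. z1
        z1 + z2 + (1 + y1) + lam,              # stationarity w.r.t. z2
        chks(lam, -c, eps),                    # smoothed complementarity
    ])
```

    The bounds $0 \le y_1 \le 2$, $z_1 \ge 0$, $z_2 \ge 0$ are then handled by the interior-point scaling of Section 3 rather than appearing in the residual vector.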

    Test Problem 2 [28]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = z_1^2 + z_3^2 - z_1 z_3 - 4z_2 - 7y_1 + 4y_2\\
    \text{s.t.}\ & y_1 + y_2 \le 1,\ y_1 \ge 0,\ y_2 \ge 0,\\
    & \min_{z_1,z_2,z_3}\ f_l = z_1^2 + 0.5z_2^2 + 0.5z_3^2 + z_1 z_2 + (1-3y_1)z_1 + (1+y_2)z_2,\\
    & \quad\text{s.t.}\ 2z_1 + z_2 - z_3 + y_1 - 2y_2 + 2 \le 0,\ z_1 \ge 0,\ z_2 \ge 0,\ z_3 \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 2 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2,z_3,\lambda}\ & f_u = z_1^2 + z_3^2 - z_1 z_3 - 4z_2 - 7y_1 + 4y_2\\
    \text{s.t.}\ & 2z_1 + z_2 + (1-3y_1) + 2\lambda = 0,\\
    & z_1 + z_2 + (1+y_2) + \lambda = 0,\\
    & z_3 - \lambda = 0,\\
    & \lambda - (2z_1 + z_2 - z_3 + y_1 - 2y_2 + 2) - \sqrt{(\lambda + 2z_1 + z_2 - z_3 + y_1 - 2y_2 + 2)^2 + 4\tilde\epsilon^2} = 0,\\
    & y_1 + y_2 \le 1,\ y_1 \ge 0,\ y_2 \ge 0,\ z_1 \ge 0,\ z_2 \ge 0,\ z_3 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2,z_3) = (0.6111, 0.3890, 0, 0, 1.8339)$, $f_u = 0.64013$, and $f_l = 1.6816$.

    Test Problem 3 [28]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = 0.1(y_1^2 + y_2^2) - 3z_1 - 4z_2 + 0.5(z_1^2 + z_2^2)\\
    \text{s.t.}\ & \min_{z_1,z_2}\ f_l = 0.5(z_1^2 + 5z_2^2) - 2z_1 z_2 - y_1 z_1 - y_2 z_2,\\
    & \quad\text{s.t.}\ -0.333z_1 + z_2 - 2 \le 0,\\
    & \quad\quad\ z_1 - 0.333z_2 - 2 \le 0,\\
    & \quad\quad\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 3 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2,\lambda_1,\lambda_2}\ & f_u = 0.1(y_1^2 + y_2^2) - 3z_1 - 4z_2 + 0.5(z_1^2 + z_2^2)\\
    \text{s.t.}\ & z_1 - 2z_2 - y_1 - 0.333\lambda_1 + \lambda_2 = 0,\\
    & 5z_2 - 2z_1 - y_2 + \lambda_1 - 0.333\lambda_2 = 0,\\
    & \lambda_1 - (-0.333z_1 + z_2 - 2) - \sqrt{(\lambda_1 - 0.333z_1 + z_2 - 2)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (z_1 - 0.333z_2 - 2) - \sqrt{(\lambda_2 + z_1 - 0.333z_2 - 2)^2 + 4\tilde\epsilon^2} = 0,\\
    & y_1 \ge 0,\ y_2 \ge 0,\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2) = (0.97, 3.14, 2.6, 1.8)$, $f_u = -8.92$, and $f_l = -6.05$.

    Test Problem 4 [28]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = y_1^2 - 2y_1 + y_2^2 - 2y_2 + z_1^2 + z_2^2\\
    \text{s.t.}\ & y_1 \ge 0,\ y_2 \ge 0,\\
    & \min_{z_1,z_2}\ f_l = (z_1 - y_1)^2 + (z_2 - y_2)^2,\\
    & \quad\text{s.t.}\ 0.5 \le z_1 \le 1.5,\ 0.5 \le z_2 \le 1.5.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 4 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2}\ & f_u = y_1^2 - 2y_1 + y_2^2 - 2y_2 + z_1^2 + z_2^2\\
    \text{s.t.}\ & 2(z_1 - y_1) = 0,\\
    & 2(z_2 - y_2) = 0,\\
    & 0.5 \le z_1 \le 1.5,\ 0.5 \le z_2 \le 1.5,\ y_1 \ge 0,\ y_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2) = (0.5, 0.5, 0.5, 0.5)$, $f_u = -1$, and $f_l = 0$.

    Test Problem 5 [28]:

    $$\begin{aligned}
    \min_y\ & f_u = y^2 + (z - 10)^2\\
    \text{s.t.}\ & -y + z \le 0,\ 0 \le y \le 15,\\
    & \min_z\ f_l = (y + 2z - 30)^2,\\
    & \quad\text{s.t.}\ y + z - 20 \le 0,\ 0 \le z \le 20.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 5 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y,z,\lambda}\ & f_u = y^2 + (z - 10)^2\\
    \text{s.t.}\ & 4(y + 2z - 30) + \lambda = 0,\\
    & \lambda - (y + z - 20) - \sqrt{(\lambda + y + z - 20)^2 + 4\tilde\epsilon^2} = 0,\\
    & -y + z \le 0,\ 0 \le y \le 15,\ 0 \le z \le 20.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y,z) = (9.9998, 9.9998)$, $f_u = 99.9996$, and $f_l = 3.7160\mathrm{e}{-07}$.

    Test Problem 6 [28]:

    $$\begin{aligned}
    \min_{y_1}\ & f_u = (y_1 - 1)^2 + 2z_1^2 - 2y_1\\
    \text{s.t.}\ & y_1 \ge 0,\\
    & \min_{z_1,z_2}\ f_l = (2z_1 - 4)^2 + (2z_2 - 1)^2 + y_1 z_1,\\
    & \quad\text{s.t.}\ 4y_1 + 5z_1 + 4z_2 \le 12,\\
    & \quad\quad\ -4y_1 - 5z_1 + 4z_2 \le -4,\\
    & \quad\quad\ 4y_1 - 4z_1 + 5z_2 \le 4,\\
    & \quad\quad\ -4y_1 + 4z_1 + 5z_2 \le 4,\\
    & \quad\quad\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 6 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,z_1,z_2,\lambda_1,\lambda_2,\lambda_3,\lambda_4}\ & f_u = (y_1 - 1)^2 + 2z_1^2 - 2y_1\\
    \text{s.t.}\ & 4(2z_1 - 4) + y_1 + 5\lambda_1 - 5\lambda_2 - 4\lambda_3 + 4\lambda_4 = 0,\\
    & 4(2z_2 - 1) + 4\lambda_1 + 4\lambda_2 + 5\lambda_3 + 5\lambda_4 = 0,\\
    & \lambda_1 - (4y_1 + 5z_1 + 4z_2 - 12) - \sqrt{(\lambda_1 + 4y_1 + 5z_1 + 4z_2 - 12)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (-4y_1 - 5z_1 + 4z_2 + 4) - \sqrt{(\lambda_2 - 4y_1 - 5z_1 + 4z_2 + 4)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_3 - (4y_1 - 4z_1 + 5z_2 - 4) - \sqrt{(\lambda_3 + 4y_1 - 4z_1 + 5z_2 - 4)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_4 - (-4y_1 + 4z_1 + 5z_2 - 4) - \sqrt{(\lambda_4 - 4y_1 + 4z_1 + 5z_2 - 4)^2 + 4\tilde\epsilon^2} = 0,\\
    & y_1 \ge 0,\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,z_1,z_2) = (1.8884, 0.89041, 0)$, $f_u = -1.2067$, and $f_l = 7.6062$.

    Test Problem 7 [28]:

    $$\begin{aligned}
    \min_y\ & f_u = (y - 5)^2 + (2z + 1)^2\\
    \text{s.t.}\ & y \ge 0,\\
    & \min_z\ f_l = (2z - 1)^2 - 1.5yz,\\
    & \quad\text{s.t.}\ -3y + z \le -3,\\
    & \quad\quad\ y - 0.5z \le 4,\\
    & \quad\quad\ y + z \le 7,\\
    & \quad\quad\ z \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 7 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y,z,\lambda_1,\lambda_2,\lambda_3}\ & f_u = (y - 5)^2 + (2z + 1)^2\\
    \text{s.t.}\ & 4(2z - 1) - 1.5y + \lambda_1 - 0.5\lambda_2 + \lambda_3 = 0,\\
    & \lambda_1 - (-3y + z + 3) - \sqrt{(\lambda_1 - 3y + z + 3)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (y - 0.5z - 4) - \sqrt{(\lambda_2 + y - 0.5z - 4)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_3 - (y + z - 7) - \sqrt{(\lambda_3 + y + z - 7)^2 + 4\tilde\epsilon^2} = 0,\\
    & y \ge 0,\ z \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y,z) = (1, 0)$, $f_u = 17$, and $f_l = 1$.

    Test Problem 8 [28]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = y_1^2 - 3y_1 + y_2^2 - 3y_2 + z_1^2 + z_2^2\\
    \text{s.t.}\ & y_1 \ge 0,\ y_2 \ge 0,\\
    & \min_{z_1,z_2}\ f_l = (z_1 - y_1)^2 + (z_2 - y_2)^2,\\
    & \quad\text{s.t.}\ 0.5 \le z_1 \le 1.5,\ 0.5 \le z_2 \le 1.5.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 8 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2}\ & f_u = y_1^2 - 3y_1 + y_2^2 - 3y_2 + z_1^2 + z_2^2\\
    \text{s.t.}\ & 2(z_1 - y_1) = 0,\\
    & 2(z_2 - y_2) = 0,\\
    & 0.5 \le z_1 \le 1.5,\ 0.5 \le z_2 \le 1.5,\ y_1 \ge 0,\ y_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2) = (0.75, 0.75, 0.75, 0.75)$, $f_u = -2.25$, and $f_l = 0$.

    Test Problem 9 [23]:

    $$\begin{aligned}
    \min_y\ & f_u = 16y^2 + 9z^2\\
    \text{s.t.}\ & -4y + z \le 0,\ y \ge 0,\\
    & \min_z\ f_l = (y + z - 20)^4,\\
    & \quad\text{s.t.}\ 4y + z - 50 \le 0,\ z \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 9 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y,z,\lambda}\ & f_u = 16y^2 + 9z^2\\
    \text{s.t.}\ & 4(y + z - 20)^3 + \lambda = 0,\\
    & \lambda - (4y + z - 50) - \sqrt{(\lambda + 4y + z - 50)^2 + 4\tilde\epsilon^2} = 0,\\
    & -4y + z \le 0,\ y \ge 0,\ z \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y,z) = (11.25, 5)$, $f_u = 2250$, and $f_l = 197.753$.

    Test Problem 10 [23]:

    $$\begin{aligned}
    \min_{y_1}\ & f_u = y_1^3 z_1 + z_2\\
    \text{s.t.}\ & 0 \le y_1 \le 1,\\
    & \min_{z_1,z_2}\ f_l = -z_2,\\
    & \quad\text{s.t.}\ y_1 z_1 - 1 \le 0,\\
    & \quad\quad\ z_1^2 + y_1 z_2 - 1 \le 0,\\
    & \quad\quad\ z_2 \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 10 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y_1,z_1,z_2,\lambda_1,\lambda_2}\ & f_u = y_1^3 z_1 + z_2\\
    \text{s.t.}\ & \lambda_1 y_1 + 2\lambda_2 z_1 = 0,\\
    & -1 + \lambda_2 y_1 = 0,\\
    & \lambda_1 - (y_1 z_1 - 1) - \sqrt{(\lambda_1 + y_1 z_1 - 1)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (z_1^2 + y_1 z_2 - 1) - \sqrt{(\lambda_2 + z_1^2 + y_1 z_2 - 1)^2 + 4\tilde\epsilon^2} = 0,\\
    & 0 \le y_1 \le 1,\ z_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,z_1,z_2) = (1, 0, 0)$, $f_u = 0$, and $f_l = 0$.

    Test Problem 11 [40]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = 2y_1 + 2y_2 - 3z_1 - 3z_2 - 60\\
    \text{s.t.}\ & y_1 + y_2 + z_1 - 2z_2 - 40 \le 0,\\
    & 0 \le y_1 \le 50,\ 0 \le y_2 \le 50,\\
    & \min_{z_1,z_2}\ f_l = (z_1 - y_1 + 20)^2 + (z_2 - y_2 + 20)^2,\\
    & \quad\text{s.t.}\ y_1 - 2z_1 \ge 10,\ y_2 - 2z_2 \ge 10,\\
    & \quad\quad\ -10 \le z_1 \le 20,\ -10 \le z_2 \le 20.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 11 is converted to the following smooth nonlinear programming problem

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2,\lambda_1,\lambda_2}\ & f_u = 2y_1 + 2y_2 - 3z_1 - 3z_2 - 60\\
    \text{s.t.}\ & 2(z_1 - y_1 + 20) + 2\lambda_1 = 0,\\
    & 2(z_2 - y_2 + 20) + 2\lambda_2 = 0,\\
    & \lambda_1 - (10 - y_1 + 2z_1) - \sqrt{(\lambda_1 + 10 - y_1 + 2z_1)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (10 - y_2 + 2z_2) - \sqrt{(\lambda_2 + 10 - y_2 + 2z_2)^2 + 4\tilde\epsilon^2} = 0,\\
    & y_1 + y_2 + z_1 - 2z_2 - 40 \le 0,\\
    & 0 \le y_1 \le 50,\ 0 \le y_2 \le 50,\ -10 \le z_1 \le 20,\ -10 \le z_2 \le 20.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2) = (25, 30, 4.9996, 10)$, $f_u = 5.0024$, and $f_l = 1.5000\mathrm{e}{-06}$.

    Test Problem 12 [23]:

    $$\begin{aligned}
    \min_y\ & f_u = (y - 3)^2 + (z - 2)^2\\
    \text{s.t.}\ & -2y + z - 1 \le 0,\\
    & y - 2z + 2 \le 0,\\
    & y + 2z - 14 \le 0,\\
    & 0 \le y \le 8,\\
    & \min_z\ f_l = (z - 5)^2,\\
    & \quad\text{s.t.}\ z \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 12 is converted to the following SNLP problem

    $$\begin{aligned}
    \min_{y,z}\ & f_u = (y - 3)^2 + (z - 2)^2\\
    \text{s.t.}\ & 2(z - 5) = 0,\\
    & -2y + z - 1 \le 0,\ y - 2z + 2 \le 0,\ y + 2z - 14 \le 0,\\
    & 0 \le y \le 8,\ z \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y,z) = (3, 5)$, $f_u = 9$, and $f_l = 0$.

    Test Problem 13 [40]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = y_1^2 - 3y_2^2 - 4z_1 + z_2^2\\
    \text{s.t.}\ & y_1^2 + 2y_2 \le 4,\\
    & y_1 \ge 0,\ y_2 \ge 0,\\
    & \min_{z_1,z_2}\ f_l = 2y_1^2 + z_1^2 - 5z_2,\\
    & \quad\text{s.t.}\ -y_1^2 + 2y_1 - 2y_2^2 + 2z_1 - z_2 \le 3,\\
    & \quad\quad\ -y_2 - 3z_1 + 4z_2 \le 4,\\
    & \quad\quad\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 13 is converted to the following SNLP problem.

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2,\lambda_1,\lambda_2}\ & f_u = y_1^2 - 3y_2^2 - 4z_1 + z_2^2\\
    \text{s.t.}\ & 2z_1 + 2\lambda_1 - 3\lambda_2 = 0,\\
    & -5 - \lambda_1 + 4\lambda_2 = 0,\\
    & \lambda_1 - (-y_1^2 + 2y_1 - 2y_2^2 + 2z_1 - z_2 - 3) - \sqrt{(\lambda_1 - y_1^2 + 2y_1 - 2y_2^2 + 2z_1 - z_2 - 3)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (-y_2 - 3z_1 + 4z_2 - 4) - \sqrt{(\lambda_2 - y_2 - 3z_1 + 4z_2 - 4)^2 + 4\tilde\epsilon^2} = 0,\\
    & y_1^2 + 2y_2 \le 4,\ y_1 \ge 0,\ y_2 \ge 0,\ z_1 \ge 0,\ z_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2) = (0, 2, 1.875, 0.90625)$, $f_u = -18.679$, and $f_l = -1.0156$.

    Test Problem 14 [40]:

    $$\begin{aligned}
    \min_y\ & f_u = (y - 1)^2 + (z - 1)^2\\
    \text{s.t.}\ & y \ge 0,\\
    & \min_z\ f_l = 0.5z^2 + 500z - 50yz,\\
    & \quad\text{s.t.}\ z \ge 0.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 14 is converted to the following SNLP problem.

    $$\begin{aligned}
    \min_{y,z}\ & f_u = (y - 1)^2 + (z - 1)^2\\
    \text{s.t.}\ & z - 50y + 500 = 0,\\
    & y \ge 0,\ z \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y,z) = (10.016, 0.81967)$, $f_u = 81.328$, and $f_l = -0.33593$.

    Test Problem 15 [40]:

    $$\begin{aligned}
    \min_{y_1,y_2}\ & f_u = -8y_1 - 4y_2 + 4z_1 - 40z_2 - 4z_3\\
    \text{s.t.}\ & y_1 \ge 0,\ y_2 \ge 0,\\
    & \min_z\ f_l = y_1 + 2y_2 + z_1 + z_2 + 2z_3,\\
    & \quad\text{s.t.}\ -z_1 + z_2 + z_3 \le 1,\\
    & \quad\quad\ 2y_1 - z_1 + 2z_2 - 0.5z_3 \le 1,\\
    & \quad\quad\ 2y_2 + 2z_1 - z_2 - 0.5z_3 \le 1,\\
    & \quad\quad\ z_i \ge 0,\ i = 1,2,3.
    \end{aligned}$$

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 15 is converted to the following SNLP problem,

    $$\begin{aligned}
    \min_{y_1,y_2,z_1,z_2,z_3,\lambda_1,\lambda_2,\lambda_3}\ & f_u = -8y_1 - 4y_2 + 4z_1 - 40z_2 - 4z_3\\
    \text{s.t.}\ & 1 - \lambda_1 - \lambda_2 + 2\lambda_3 = 0,\\
    & 1 + \lambda_1 + 2\lambda_2 - \lambda_3 = 0,\\
    & 2 + \lambda_1 - 0.5\lambda_2 - 0.5\lambda_3 = 0,\\
    & \lambda_1 - (z_2 + z_3 - z_1 - 1) - \sqrt{(\lambda_1 + z_2 + z_3 - z_1 - 1)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_2 - (2y_1 - z_1 + 2z_2 - 0.5z_3 - 1) - \sqrt{(\lambda_2 + 2y_1 - z_1 + 2z_2 - 0.5z_3 - 1)^2 + 4\tilde\epsilon^2} = 0,\\
    & \lambda_3 - (2y_2 + 2z_1 - z_2 - 0.5z_3 - 1) - \sqrt{(\lambda_3 + 2y_2 + 2z_1 - z_2 - 0.5z_3 - 1)^2 + 4\tilde\epsilon^2} = 0,\\
    & y_1 \ge 0,\ y_2 \ge 0,\ z_i \ge 0,\ i = 1,2,3.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have that $(y_1,y_2,z_1,z_2,z_3) = (0, 0.9, 0, 0.6, 0.4)$, $f_u = -29.2$, and $f_l = 3.2$.

    In this section, the efficacy of the proposed Algorithm 3.8 was tested by using a watershed water trading decision-making model based on bilevel programming [42]. The upper decision-maker is the watershed management agency, which acts as the planning, controlling, and coordinating center of a watershed, and each user is a lower decision-maker. The watershed water trading decision-making model is formulated as follows:

    $$\begin{aligned}
    \max_{w,t,r_1,g_1,r_2,g_2}\ & F = 0.4w + t(q_1 + q_2) + f_1 + f_2,\\
    \text{s.t.}\ & r_1 + r_2 + w = 90,\quad q_1 + q_2 + w \le 90,\quad g_1 + g_2 = 20,\\
    & r_1 \ge 38,\ r_2 \ge 42,\ g_1 \ge 7,\ g_2 \ge 8,\ w \ge 6,\ 0.3 \le t \le 2.0,\\
    & \max_{q_1,l_1}\ f_1 = 0.7q_1 - q_1 t - 0.3(45 - q_1)^2 + (r_1 - q_1)[0.9 - 0.01(r_1 + r_2 - q_1 - q_2)]\\
    & \qquad\qquad - 0.2(0.2q_1 - l_1)^2 + (g_1 - l_1)[0.8 - 0.01(g_1 + g_2 - l_1 - l_2)],\\
    & \max_{q_2,l_2}\ f_2 = 0.8q_2 - q_2 t - 0.2(47 - q_2)^2 + (r_2 - q_2)[0.9 - 0.01(r_1 + r_2 - q_1 - q_2)]\\
    & \qquad\qquad - 0.1(0.3q_2 - l_2)^2 + (g_2 - l_2)[0.8 - 0.01(g_1 + g_2 - l_1 - l_2)],\\
    & \quad\text{s.t.}\ l_1 + l_2 \le 20,\ q_1, l_1 \ge 0,\ q_2, l_2 \ge 0,
    \end{aligned}$$

    where $q_1$ and $q_2$ are the actual water intake volumes of water consumer A and water consumer B, respectively; $l_1$ and $l_2$ are the wastewater discharge volumes of the two users; $r_1$ and $r_2$ are the water rights of the two users; $g_1$ and $g_2$ are the emission rights of the two users; $w$ is the ecological water requirement of the watershed; and $t$ represents the water resource fees. Applying the Karush-Kuhn-Tucker condition to the lower-level problems and using the CHKS smoothing function, the watershed water trading decision-making model is converted to the following SNLP problem,

    $$\begin{aligned}
    \max_{w,t,r_1,g_1,r_2,g_2}\ & F = 0.4w + t(q_1 + q_2) + f_1 + f_2,\\
    \text{s.t.}\ & r_1 + r_2 + w = 90,\quad g_1 + g_2 = 20,\\
    & 0.8 - t + 0.4(47 - q_2) + 0.01(r_2 - q_2) - (0.9 - 0.01(r_1 + r_2 - q_1 - q_2)) - 0.06(0.3q_2 - l_2) = 0,\\
    & 0.2(0.3q_2 - l_2) + 0.01(g_2 - l_2) - (0.8 - 0.01(g_1 + g_2 - l_1 - l_2)) - \lambda_1 = 0,\\
    & \lambda_1 - (l_1 + l_2 - 20) - \sqrt{(\lambda_1 + l_1 + l_2 - 20)^2 + 4\tilde\epsilon^2} = 0,\\
    & 0.7 - t + 0.6(45 - q_1) + 0.01(r_1 - q_1) - (0.9 - 0.01(r_1 + r_2 - q_1 - q_2)) - 0.08(0.2q_1 - l_1) - 0.01\lambda_2 = 0,\\
    & 0.4(0.2q_1 - l_1) + 0.01(g_1 - l_1) - (0.8 - 0.01(g_1 + g_2 - l_1 - l_2)) - 0.01\lambda_3\\
    & \qquad + \lambda_4\Big(1 - \frac{\lambda_1 + l_1 + l_2 - 20}{\sqrt{(\lambda_1 + l_1 + l_2 - 20)^2 + 4\tilde\epsilon^2}}\Big) = 0,\\
    & q_1 + q_2 + w \le 90,\ g_1 \ge 7,\ g_2 \ge 8,\ r_1 \ge 38,\ r_2 \ge 42,\ w \ge 6,\ 0.3 \le t \le 2.0,\\
    & q_1, l_1 \ge 0,\ q_2, l_2 \ge 0.
    \end{aligned}$$

    Applying Algorithm 3.7 to the above nonlinear programming problem, we have the following data at $\tilde\epsilon = 0.001$: $g_1 = 9$, $g_2 = 10.450$, $l_1 = 6.8755$, $l_2 = 9.0751$, $q_1 = 41.497$, $q_2 = 43$, $r_1 = 39.332$, $r_2 = 44.269$, $t = 0.3$, $w = 6.2338$, $f_1 = 12.17$, $f_2 = 19.043$, and $F = 59.055$.

    Table 3 compares the obtained results with those of the method in [25] and those of various existing methods (Ref.). From Table 3, we can see that the solutions obtained via Algorithm 3.8 are better than those given in [25] and those of the various existing methods. Therefore, Algorithm 3.8 can also be applied to practical problems.

    Table 3. Comparison of the results of Algorithm 3.8 with those of the method in [25] and with those of various existing methods (Ref.) for the watershed water application.

    Parameter | Method in [25] | Algorithm 3.8 | Results in Ref.
    $q_1$ | 42 | 41.497 | 41.2016
    $q_2$ | 41.9680 | 43 | 42.5388
    $l_1$ | 6.9984 | 6.8755 | 6.4772
    $l_2$ | 9.1751 | 9.0751 | 9.1611
    $r_1$ | 39.9679 | 39.332 | 39.4861
    $r_2$ | 44 | 44.269 | 44.2542
    $g_1$ | 9 | 9 | 9.0015
    $g_2$ | 11 | 10.45 | 10.9985
    $w$ | 6.0321 | 6.2338 | 6.2596
    $t$ | 0.3000 | 0.3 | 0.3226
    $F$ | 58.97837 | 59.055 | 58.4482
    $f_1$ | 13.40286 | 12.17 | 10.964
    $f_2$ | 17.97226 | 19.043 | 17.964

    In this paper, Algorithm 3.8 has been presented to solve the NBLP problem (2.1). The Karush-Kuhn-Tucker conditions are used together with the CHKS smoothing function to convert the NBLP problem (2.1) into a standard SNLP problem. The SNLP problem is solved by using the proposed nonmonotone active-set interior-point trust-region technique to obtain an approximately optimal solution to the NBLP problem. In the proposed technique, the smoothed nonlinear programming problem is transformed into an equality-constrained optimization problem with bounded variables by using an active-set approach and the penalty method. To solve the equality-constrained optimization problem with bounded variables, Newton's interior-point method is used; but it is a local method, so it might not converge if the starting point is far from a stationary point. To overcome this problem, the nonmonotone trust-region strategy is used. It is generally promising and efficient, and it can overcome the aforementioned shortcomings.

    Applications to mathematical programs with equilibrium constraints are given to clarify the effectiveness of the proposed approach. Numerical results reflect the good behavior of the proposed technique, and the computed results converge to the optimal solutions. It is clear from the comparison between the solutions obtained by Algorithm 3.8 and those of the algorithm in [18] that Algorithm 3.8 is able to find the optimal solution of some problems with fewer iterations, fewer function evaluations, and less time.

    Furthermore, the usefulness of the CHKS smoothing function with the nonmonotone active-set interior-point trust-region algorithm in efforts to solve NBLP problems was examined by using a real-world case about a watershed trading decision-making problem. Numerical experiments show that the suggested method surpasses rival algorithms in terms of efficacy.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    The authors declare that they have no competing interests.



    [1] A. M. Childs, R. Kothari, R. D. Somma, Quantum algorithm for systems of linear equations with exponentially improved dependence on precision, SIAM J. Comput., 46 (2017), 1920–1950. https://doi.org/10.1137/16M1087072 doi: 10.1137/16M1087072
    [2] E. Carson, N. J. Higham, Accelerating the solution of linear systems by iterative refinement in three precisions, SIAM J. Sci. Comput., 40 (2018), 817–847. https://doi.org/10.1137/17m1140819 doi: 10.1137/17m1140819
    [3] N. J. Higham, S. Pranesh, Exploiting lower precision arithmetic in solving symmetric positive definite linear systems and least squares problem, SIAM J. Sci. Comput., 43 (2021), 258–277. https://doi.org/10.1137/19M1298263 doi: 10.1137/19M1298263
    [4] T. A. Davis, S. Rajamanickam, W. M. S. Lakhdar, A survey of direct methods for sparse linear systems, Acta Numer., 25 (2016), 383–566. https://doi.org/10.1017/S0962492916000076 doi: 10.1017/S0962492916000076
    [5] H. A. V. D. Vorst, Computer solution of large linear systems, Elsevier, 1999. https://doi.org/10.1016/0377-0427(94)90302-6
    [6] Z. Zlatev, Use of iterative refinement in the solution of sparse linear systems, SIAM J. Numer. Anal., 19 (1982), 381–399. https://doi.org/10.1137/0719024 doi: 10.1137/0719024
    [7] H. Y. Huang, A direct method for the general solution of a system of linear equations, J. Optimiz. Theory Appl., 16 (1975), 429–445. https://doi.org/10.1007/bf00933852 doi: 10.1007/bf00933852
    [8] E. Spedicato, M. T. Vespucci, Variations on the Gram-Schmidt and the Huang algorithms for linear systems: A numerical study, Appl. Math., 38 (1993), 81–100. https://doi.org/10.21136/AM.1993.104537 doi: 10.21136/AM.1993.104537
    [9] D. M. Young, Iterative solution of large linear systems, Elsevier, 2014.
    [10] V. Simoncini, D. B. Szyld, Recent developments in Krylov subspace methods for linear systems, Numer. Linear Algebr., 14 (2007), 1–59. https://doi.org/10.1002/nla.499 doi: 10.1002/nla.499
    [11] L. Tondji, D. A. Lorenz, Faster randomized block sparse Kaczmarz by averaging, Numer. Algorithms, 93 (2023), 1417–1451. https://doi.org/10.1007/s11075-022-01473-x doi: 10.1007/s11075-022-01473-x
    [12] V. Patel, M. Jahangoshahi, D. A. Maldonado, Randomized block adaptive linear system solvers, SIAM J. Matrix Anal. A., 44 (2023), 1349–1369. https://doi.org/10.1137/22M1488715 doi: 10.1137/22M1488715
    [13] M. Guida, C. Sbordone, The reflection method for the numerical solution of linear systems, SIAM Rev., 65 (2023), 1137–1151. https://doi.org/10.1137/22M1470463 doi: 10.1137/22M1470463
    [14] G. Cimmino, Approximate computation of the solutions of systems of linear equations, Rend. Accad. Sci. Fis. Mat. Napoli, 89 (2022), 65–72.
    [15] M. Benzi, Gianfranco Cimmino's contributions to numerical mathematics, Atti del Seminario di Analisi Matematica, Dipartimento di Matematica dell'Universita di Bologna. Volume Speciale: Ciclo di Conferenze in Ricordo di Gianfranco Cimmino, 2004, 87–109.
    [16] Y. Censor, T. Elfving, New methods for linear inequalities, Linear Algebra Appl., 42 (1982), 199–211. https://doi.org/10.1016/0024-3795(82)90149-5 doi: 10.1016/0024-3795(82)90149-5
    [17] T. Elfving, A projection method for semidefinite linear systems and its applications, Linear Algebra Appl., 391 (2004), 57–73. https://doi.org/10.1016/j.laa.2003.11.025 doi: 10.1016/j.laa.2003.11.025
    [18] Y. Censor, M. D. Altschuler, W. D. Powlis, On the use of Cimmino's simultaneous projections method for computing a solution of the inverse problem in radiation therapy treatment planning, Inverse Probl., 4 (1988), 607. https://doi.org/10.1088/0266-5611/4/3/006 doi: 10.1088/0266-5611/4/3/006
    [19] S. Petra, C. Popa, C. Schnörr, Extended and constrained Cimmino-type algorithms with applications in tomographic image reconstruction, Universitätsbibliothek Heidelberg, 2008. https://doi.org/10.11588/heidok.00008798
    [20] M. R. Hestenes, E. Stiefel, Methods of conjugate Gradients for solving linear systems, J. Res. Natl. Bureau Stand., 49 (1952), 409–436. https://doi.org/10.6028/jres.049.044 doi: 10.6028/jres.049.044
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
