Research article

A nonmonotone active interior point trust region algorithm based on CHKS smoothing function for solving nonlinear bilevel programming problems

  • Received: 30 October 2023 Revised: 27 December 2023 Accepted: 29 December 2023 Published: 06 February 2024
  • MSC : 49N35, 49N10, 93D52, 93D22, 65K05

• In this paper, an approach is suggested to solve nonlinear bilevel programming (NBLP) problems. In the suggested method, we convert the NBLP problem into a standard nonlinear programming problem with complementary constraints by applying the Karush-Kuhn-Tucker conditions to the lower-level problem. By using the Chen-Harker-Kanzow-Smale (CHKS) smoothing function, the nonlinear programming problem is successively smoothed. A nonmonotone active interior-point trust-region algorithm is introduced to solve the smoothed nonlinear programming problem and obtain an approximately optimal solution to the NBLP problem. Results from simulations on several benchmark problems and a real-world case about a watershed trading decision-making problem demonstrate the effectiveness of the suggested approach for solving NBLP problems.

    Citation: B. El-Sobky, Y. Abo-Elnaga, G. Ashry, M. Zidan. A nonmonton active interior point trust region algorithm based on CHKS smoothing function for solving nonlinear bilevel programming problems[J]. AIMS Mathematics, 2024, 9(3): 6528-6554. doi: 10.3934/math.2024318




A nested optimization problem with two levels in a hierarchy, i.e., upper-level and lower-level decision-making, is known as a bilevel programming problem. Each level has its own objective function and constraints. The decision-maker at the upper level always takes the lead, followed by the one at the lower level. The objective function and constraints of the upper-level problem depend on its own decision variables and on the optimal solution of the lower-level problem. The decision-maker at the lower level optimizes its objective function with respect to the variables provided by the upper-level decision-maker, who in turn chooses its variables with full knowledge of the lower level's potential responses. Bilevel mathematical programming stresses the system's non-cooperative nature, in contrast to multiobjective mathematical programming methodologies. This hierarchical model has several applications, including resource allocation, decentralized control, and network design problems, and its successful application depends on how well it can be solved when handling realistic complications [3]. Bilevel mathematical programming has attracted a lot of attention, and many effective algorithms have been presented. Current methods for solving NBLP problems can be divided into four categories: the Karush-Kuhn-Tucker condition approach [1,15,17,19,21], the penalty function approach [24,29], the descent approach [22,36], and the evolutionary approach [41].

In this paper, a class of NBLP problems is reduced to a traditional nonlinear programming problem with complementarity constraints by substituting the lower-level problem with its Karush-Kuhn-Tucker optimality conditions. The CHKS smoothing function is then used to smooth the resulting problem.

The smoothed nonlinear programming problem (SNLP) is solved by using the nonmonotone active interior-point trust-region technique to obtain an approximately optimal solution to the nonlinear bilevel programming problem. In the nonmonotone active interior-point trust-region technique, the smoothed nonlinear programming problem is transformed into an equality-constrained optimization problem with bounded variables by using an active-set approach and the penalty method; for more details see [12,13,14,15,18,19]. To solve the equality-constrained optimization problem with bounded variables, Newton's interior-point approach [6] is used. Because Newton's interior-point method is a local method, it might not converge if the starting point is far from a stationary point. A trust-region strategy is used to treat this problem and ensure convergence to a stationary point from any starting point. A trust-region technique can induce strong global convergence and is a very important method for solving unconstrained and constrained optimization problems; see [10,11,12,13,14,16,17,18,19]. One advantage of the trust-region technique is that it does not require the model's objective function to be convex. However, in the traditional trust-region strategy, after solving a trust-region subproblem we must use some criterion to determine whether the trial step is acceptable; otherwise, the trust-region radius needs to be reduced. A method for calculating the trust-region radius at each iteration is therefore an important part of trust-region techniques. The standard trust-region strategy is predicated on the agreement between the objective function and the model. The trust-region radius is updated according to the ratio $t_k = \mathrm{Ared}_k/\mathrm{Pred}_k$, where $\mathrm{Ared}_k$ refers to the actual reduction and $\mathrm{Pred}_k$ refers to the predicted reduction. Whenever $t_k$ is close to 1, there is good agreement between the model and the objective function over the current trust region. It is well known that the standard trust-region radius is chosen independently of the gradient and Hessian of the objective function, so we cannot know whether the radius is convenient for the whole implementation. This might increase the number of subproblems that must be solved in the method's inner iterations, which reduces the method's efficiency.

To overcome this problem, many authors have proposed various nonmonotone trust-region methods; for example, see [9,31,37,38,43,44]. Motivated by the nonmonotone trust-region strategy in [31], we use it in our method. It is generally promising and efficient, and it can overcome the aforementioned shortcomings.

Furthermore, the usefulness of the CHKS smoothing function with the nonmonotone active interior-point trust-region algorithm for solving NBLP problems was examined by using several benchmark problems and a real-world case about a watershed trading decision-making problem. Numerical experiments show that the suggested method surpasses rival algorithms in terms of efficacy.

This paper is organized as follows: Section 2 introduces the mathematical formulation of the NBLP problem, basic definitions of the CHKS smoothing function, and the smoothing method for the nonlinear complementarity problem used to obtain the SNLP problem. Section 3 introduces the nonmonotone active interior-point trust-region algorithm for solving the SNLP problem. Results from simulations on several benchmark problems and a real-world case about a watershed trading decision-making problem are reported in Section 4. The conclusion is given in Section 5.

    In this paper, we will consider the following NBLP problem

\begin{array}{ll} \underset{a_u\leq y\leq b_u}{\min} & f_u(y,z)\\ s.t. & c_u(y,z)\leq 0,\\ & \underset{a_l\leq z\leq b_l}{\min}\; f_l(y,z),\\ & \;\; s.t.\;\; c_l(y,z)\leq 0, \end{array}  (2.1)

where $y\in \Re^{n_1}$ and $z\in \Re^{n_2}$. The functions $f_u:\Re^{n_1+n_2}\rightarrow\Re$, $f_l:\Re^{n_1+n_2}\rightarrow\Re$, $c_u:\Re^{n_1+n_2}\rightarrow\Re^{m_1}$, and $c_l:\Re^{n_1+n_2}\rightarrow\Re^{m_2}$ are assumed to be at least twice continuously differentiable in our method.

The NBLP problem (2.1) is reduced to the following one-objective optimization problem by applying the Karush-Kuhn-Tucker optimality conditions to the lower-level problem:

\begin{array}{ll} \underset{y,z}{\min} & f_u(y,z)\\ s.t. & c_u(y,z)\leq 0,\\ & \nabla_z f_l(y,z)+\nabla_z c_l(y,z)\lambda = 0,\\ & c_l(y,z)\leq 0,\\ & \lambda_j c_{l_j}(y,z) = 0, \;\; j = 1,...,m_2,\\ & \lambda_j\geq 0, \;\; j = 1,...,m_2,\\ & a_u\leq y\leq b_u, \;\; a_l\leq z\leq b_l, \end{array}  (2.2)

where $\lambda\in\Re^{m_2}$ is the multiplier vector associated with the inequality constraint $c_l(y,z)\leq 0$. Problem (2.2) with the nonlinear complementarity condition is non-convex and non-differentiable; moreover, the regularity assumptions required to handle smooth optimization problems are never satisfied, so our approach cannot be applied directly to problem (2.2). For this reason, we use the CHKS smoothing function to overcome this difficulty; see [5,26,35].

Definition 2.1. The Chen-Mangasarian smoothing function is the map $\hat{\phi}(g,h):\Re^2\rightarrow\Re$ defined by $\hat{\phi}(g,h) = g+h-\sqrt{(g-h)^2}$. By introducing a smoothing parameter $\tilde{\epsilon}\in\Re$ into the smoothing function $\hat{\phi}(g,h)$, we obtain the CHKS smoothing function

\hat{\phi}(g,h,\tilde{\epsilon}) = g+h-\sqrt{(g-h)^2+4\tilde{\epsilon}^2}.  (2.3)

The Chen-Mangasarian smoothing function has the property that $\hat{\phi}(g,h) = 0$ if and only if $g\geq 0$, $h\geq 0$, and $gh = 0$, but it is non-differentiable at $g = h = 0$. In contrast, the CHKS smoothing function has the property that $\hat{\phi}(g,h,\tilde{\epsilon}) = 0$ if and only if $g\geq 0$, $h\geq 0$, and $gh = \tilde{\epsilon}^2$ for $\tilde{\epsilon}\geq 0$, and the function is smooth in $g$ and $h$ for $\tilde{\epsilon}\neq 0$.
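As an illustration of these properties, the following Python snippet (an illustrative sketch, not part of the authors' MATLAB implementation) evaluates the CHKS function and checks numerically that a root of $\hat{\phi}(g,h,\tilde{\epsilon}) = 0$ satisfies $gh = \tilde{\epsilon}^2$:

```python
import numpy as np

def chks(g, h, eps):
    """CHKS smoothing function (2.3): g + h - sqrt((g - h)^2 + 4*eps^2)."""
    return g + h - np.sqrt((g - h) ** 2 + 4.0 * eps ** 2)

eps = 1e-3
g = 0.5
h = eps ** 2 / g          # with g fixed, the root of chks(g, ., eps) satisfies g*h = eps^2
print(chks(g, h, eps))    # ~0 up to rounding error
print(g * h, eps ** 2)    # both equal 1e-6
```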

    Consequently, by using the CHKS smoothing function (2.3), the problem (2.2) can be approximated as follows:

\begin{array}{ll} \underset{y,z}{\min} & f_u(y,z)\\ s.t. & c_u(y,z)\leq 0,\\ & \nabla_z f_l(y,z)+\nabla_z c_l(y,z)\lambda = 0,\\ & \lambda_j-c_{l_j}-\sqrt{(\lambda_j+c_{l_j})^2+4\tilde{\epsilon}^2} = 0, \;\; j = 1,...,m_2,\\ & a_u\leq y\leq b_u, \;\; a_l\leq z\leq b_l. \end{array}  (2.4)

    The above smoothed nonlinear programming problem can be summarized as follows

\begin{array}{ll} \underset{x}{\min} & f_u(x)\\ s.t. & c_u(x)\leq 0,\\ & c_e(x) = 0,\\ & \beta_a\leq x\leq \beta_b, \end{array}  (2.5)

where $x = (y,z,\lambda)^T\in\Re^n$ with $n = n_1+n_2+m_2$ and $c_e(x) = \big[\nabla_z f_l(y,z)+\nabla_z c_l(y,z)\lambda,\;\lambda_j-c_{l_j}-\sqrt{(\lambda_j+c_{l_j})^2+4\tilde{\epsilon}^2}\big]$, $j = 1,...,m_2$. The functions $f_u(x):\Re^n\rightarrow\Re$, $c_u(x):\Re^n\rightarrow\Re^{m_1}$, and $c_e(x):\Re^n\rightarrow\Re^{n_2+m_2}$ are twice continuously differentiable and $m_1<n$. We denote the feasible set by $E = \{x: \beta_a\leq x\leq\beta_b\}$ and its strict interior by $int(E) = \{x: \beta_a<x<\beta_b\}$, where $\beta_a\in\{\Re\cup\{-\infty\}\}^n$, $\beta_b\in\{\Re\cup\{+\infty\}\}^n$, and $\beta_a<\beta_b$.
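To make the structure of $c_e(x)$ concrete, the sketch below assembles the equality-constraint residual for a stacked point $x = (y,z,\lambda)$ from user-supplied callbacks; the callback names (grad_fl_z, jac_cl_z, cl) are hypothetical placeholders for the problem-specific derivatives, not routines from the paper.

```python
import numpy as np

def build_ce(y, z, lam, grad_fl_z, jac_cl_z, cl, eps):
    """Equality constraints c_e(x) of the smoothed problem (2.5)."""
    # Lower-level stationarity: grad_z f_l(y,z) + grad_z c_l(y,z) @ lambda = 0   (n2 rows)
    stat = grad_fl_z(y, z) + jac_cl_z(y, z).T @ lam
    # Smoothed complementarity rows via CHKS with g = lambda_j, h = -c_{l_j}:
    # lambda_j - c_{l_j} - sqrt((lambda_j + c_{l_j})^2 + 4*eps^2) = 0             (m2 rows)
    cl_val = cl(y, z)
    comp = lam - cl_val - np.sqrt((lam + cl_val) ** 2 + 4.0 * eps ** 2)
    return np.concatenate([stat, comp])
```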

Several methods have been suggested to solve the smoothed nonlinear programming problem (2.5); see [11,12,16,17,19]. The nonmonotone active interior-point trust-region algorithm is proposed in this paper to solve problem (2.5), and a detailed description of this algorithm is given in the following section.

In this section, we first offer a detailed description of the active-set strategy with the penalty method used to convert problem (2.5) into an equality-constrained optimization problem with bounded variables. Second, the basic steps for using Newton's interior-point method to solve the equality-constrained optimization problem are presented. Third, the main steps of the nonmonotone trust-region algorithm are presented. Finally, the main steps for solving problem (2.1) are introduced.

Motivated by the active-set strategy proposed in [8] and used in [10,11,12,13,14], we define a 0-1 diagonal matrix $D(x)\in\Re^{m_1\times m_1}$ whose diagonal entries are defined as follows:

d_i(x) = \left\{\begin{array}{ll} 1 & \mbox{if } c_{u_i}(x)\geq 0,\\ 0 & \mbox{if } c_{u_i}(x)<0. \end{array}\right.  (3.1)
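A minimal sketch of the active-set indicator (3.1): each diagonal entry of $D(x)$ flags whether the corresponding upper-level inequality is active or violated, so the penalty term only penalizes those components.

```python
import numpy as np

def active_set_matrix(cu_vals):
    """Diagonal 0-1 matrix D(x) of (3.1): d_i = 1 if c_{u_i}(x) >= 0, else 0."""
    return np.diag((np.asarray(cu_vals, dtype=float) >= 0.0).astype(float))

# Example: with c_u(x) = (-0.3, 0.0, 1.2), only the last two constraints enter
# the penalty term (sigma_u/2)*||D(x) c_u(x)||^2.
print(active_set_matrix([-0.3, 0.0, 1.2]))
```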

    Problem (2.5) is transformed into the following equality-constrained optimization problem using the previous matrix.

\begin{array}{ll} \underset{x}{\min} & f_u(x)\\ s.t. & c_e(x) = 0,\\ & c_u(x)^T D(x) c_u(x) = 0,\\ & \beta_a\leq x\leq \beta_b. \end{array}

    The previous problem is transformed into the following problem by using a penalty method

\begin{array}{ll} \underset{x}{\min} & f_u(x)+\frac{\sigma_u}{2}\|D(x)c_u(x)\|^2\\ s.t. & c_e(x) = 0,\\ & \beta_a\leq x\leq \beta_b, \end{array}  (3.2)

    where σu is a positive parameter.

Motivated by the interior-point method in [6], let $L(x,\mu_e,\lambda_a,\lambda_b)$ be the Lagrangian function associated with problem (3.2), defined as follows

L(x,\mu_e,\lambda_a,\lambda_b) = \ell(x,\mu_e;\sigma_u)-\lambda_a^T(x-\beta_a)-\lambda_b^T(\beta_b-x),  (3.3)

    where

\ell(x,\mu_e) = f_u(x)+\mu_e^T c_e(x),  (3.4)

    and

\ell(x,\mu_e;\sigma_u) = \ell(x,\mu_e)+\frac{\sigma_u}{2}\|D(x)c_u(x)\|^2,  (3.5)

such that $\mu_e$, $\lambda_a$, and $\lambda_b$ represent the Lagrange multiplier vectors associated with the equality constraint $c_e(x) = 0$ and the inequality constraints $(x-\beta_a)\geq 0$ and $(\beta_b-x)\geq 0$, respectively. A point $x\in E$ is a local minimizer of problem (3.2) if there exist multiplier vectors $\mu_e\in\Re^{n_2+m_2}$, $\lambda_a\in\Re^n_+$, and $\lambda_b\in\Re^n_+$ such that $(x,\mu_e,\lambda_a,\lambda_b)$ satisfies the following Karush-Kuhn-Tucker conditions:

\nabla_x\ell(x,\mu_e;\sigma_u)-\lambda_a+\lambda_b = 0,  (3.6)
c_e(x) = 0,  (3.7)
\lambda_a^{(j)}(x^{(j)}-\beta_a^{(j)}) = 0,  (3.8)
\lambda_b^{(j)}(\beta_b^{(j)}-x^{(j)}) = 0,  (3.9)

    where

\nabla_x\ell(x,\mu_e;\sigma_u) = \nabla_x\ell(x,\mu_e)+\sigma_u\nabla c_u(x)D(x)c_u(x),  (3.10)

    and

\nabla_x\ell(x,\mu_e) = \nabla f_u(x)+\nabla c_e(x)\mu_e.  (3.11)

    Let V(x) be a diagonal matrix whose diagonal elements are as follows:

v^{(j)}(x) = \left\{\begin{array}{ll} (x^{(j)}-\beta_a^{(j)}), & \mbox{if } (\nabla_x\ell(x,\mu_e;\sigma_u))^{(j)}\geq 0 \mbox{ and } \beta_a^{(j)}>-\infty,\\ (\beta_b^{(j)}-x^{(j)}), & \mbox{if } (\nabla_x\ell(x,\mu_e;\sigma_u))^{(j)}<0 \mbox{ and } \beta_b^{(j)}<+\infty,\\ 1, & \mbox{otherwise}. \end{array}\right.  (3.12)

    For more details see [11,12,20]. Using the scaling matrix V(x), conditions (3.6)–(3.9) are equivalent to the following nonlinear system,

V^2(x)\nabla_x\ell(x,\mu_e;\sigma_u) = 0,  (3.13)
c_e(x) = 0.  (3.14)

For the following reasons, the nonlinear system (3.13)-(3.14) is continuous but not everywhere differentiable:

● It may be non-differentiable when $v^{(j)} = 0$. To overcome this problem, we restrict $x\in int(E)$.

● It may be non-differentiable when $v^{(j)}$ has an infinite upper bound and a finite lower bound and $(\nabla_x\ell(x,\mu_e;\sigma_u))^{(j)} = 0$. To overcome this problem, we define a vector $\theta(x)$ whose components $\theta^{(j)}(x) = \partial((v^{(j)})^2)/\partial x^{(j)}$, $j = 1,...,n$, are defined as follows:

\theta^{(j)}(x) = \left\{\begin{array}{ll} 1, & \mbox{if } (\nabla_x\ell(x,\mu_e;\sigma_u))^{(j)}\geq 0 \mbox{ and } \beta_a^{(j)}>-\infty,\\ -1, & \mbox{if } (\nabla_x\ell(x,\mu_e;\sigma_u))^{(j)}<0 \mbox{ and } \beta_b^{(j)}<+\infty,\\ 0, & \mbox{otherwise}. \end{array}\right.  (3.15)
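The matrix $V(x)$ of (3.12) and the vector $\theta(x)$ of (3.15) can be formed componentwise; the sketch below assumes the sign convention reconstructed above (entries $1$, $-1$, $0$) and that each bound entry is stored as either a finite number or $\pm\infty$.

```python
import numpy as np

def scaling_and_theta(x, grad_ell, beta_a, beta_b):
    """Componentwise v(x) of (3.12) and theta(x) of (3.15)."""
    n = x.size
    v = np.ones(n)
    theta = np.zeros(n)
    for j in range(n):
        if grad_ell[j] >= 0.0 and np.isfinite(beta_a[j]):
            v[j] = x[j] - beta_a[j]        # distance to the relevant lower bound
            theta[j] = 1.0
        elif grad_ell[j] < 0.0 and np.isfinite(beta_b[j]):
            v[j] = beta_b[j] - x[j]        # distance to the relevant upper bound
            theta[j] = -1.0
        # otherwise v[j] = 1 and theta[j] = 0 (unbounded direction)
    return np.diag(v), theta
```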

If we use Newton's method to solve the nonlinear system (3.13)-(3.14), we get

[V^2(x)\nabla_x^2\ell(x,\mu_e;\sigma_u)+diag(\nabla_x\ell(x,\mu_e;\sigma_u))diag(\theta(x))]\Delta x+V^2(x)\nabla c_e(x)\Delta\mu_e = -V^2(x)\nabla_x\ell(x,\mu_e;\sigma_u),  (3.16)
\nabla c_e(x)^T\Delta x = -c_e(x),  (3.17)

    where

\nabla_x^2\ell(x,\mu_e;\sigma_u) = H+\sigma_u\nabla c_u(x)D(x)\nabla c_u(x)^T,  (3.18)

    and H is the Hessian of the Lagrangian function (3.4) or an approximation to it.

Restricting $x\in int(E)$ is necessary to ensure that the matrix $V(x)$ is nonsingular. Therefore, putting $\Delta x = V(x)s$ in both Eqs (3.16) and (3.17) and multiplying both sides of Eq (3.16) by $V^{-1}(x)$, we get

[V(x)\nabla_x^2\ell(x,\mu_e;\sigma_u)V(x)+diag(\nabla_x\ell(x,\mu_e;\sigma_u))diag(\theta(x))]s+V(x)\nabla c_e(x)\Delta\mu_e = -V(x)\nabla_x\ell(x,\mu_e;\sigma_u),  (3.19)
(V(x)\nabla c_e(x))^T s = -c_e(x).  (3.20)

    It should be noted that the step sk produced by the systems (3.19) and (3.20) is equivalent to the step produced by resolving the following quadratic programming subproblem,

\begin{array}{ll} \min & \ell(x,\mu_e;\sigma_u)+(V(x)\nabla_x\ell(x,\mu_e;\sigma_u))^T s+\frac{1}{2}s^T B s\\ s.t. & c_e(x)+(V(x)\nabla c_e(x))^T s = 0, \end{array}  (3.21)

    where

B = G(x)+\sigma_u V(x)\nabla c_u(x)D(x)\nabla c_u(x)^T V(x),  (3.22)

    and

G(x) = V(x)H(x)V(x)+diag(\nabla_x\ell(x,\mu_e;\sigma_u))diag(\theta(x)).  (3.23)
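Given $H$, $V$, $D$, $\theta$, the gradient of $\ell$, and the Jacobian of $c_u$, the matrices $G(x)$ of (3.23) and $B$ of (3.22) are plain matrix products. A sketch (here jac_cu stores the rows $\nabla c_{u_i}(x)^T$, so $\nabla c_u(x)$ corresponds to jac_cu.T; this layout is an assumption for illustration):

```python
import numpy as np

def build_B_and_G(H, V, D, theta, grad_ell, jac_cu, sigma_u):
    """G(x) of (3.23) and B of (3.22)."""
    G = V @ H @ V + np.diag(grad_ell) @ np.diag(theta)
    B = G + sigma_u * V @ jac_cu.T @ D @ jac_cu @ V
    return B, G
```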

While Newton's approach has the advantage of being quadratically convergent under reasonable assumptions, it also has the disadvantage of requiring the starting point to be close to the solution. The nonmonotone trust-region globalization approach is used to ensure convergence from any starting point. The nonmonotone trust-region globalization strategy is a crucial method for solving smooth nonlinear unconstrained or constrained optimization problems and can produce strong global convergence.

The main steps of the nonmonotone trust-region algorithm for solving the quadratic subproblem (3.21) are described in what follows.

    The trust-region subproblem associated with the problem (3.21) is given by

\begin{array}{ll} \min & \ell(x,\mu_e;\sigma_u)+(V(x)\nabla_x\ell(x,\mu_e;\sigma_u))^T s+\frac{1}{2}s^T B s\\ s.t. & c_e(x)+(V(x)\nabla c_e(x))^T s = 0,\\ & \|s\|\leq\delta_k, \end{array}  (3.24)

    where δk is the radius of the trust-region.

Because the hyperplane of the linearized constraints $c_e(x)+(V(x)\nabla c_e(x))^T s = 0$ and the constraint $\|s\|\leq\delta_k$ may not intersect, subproblem (3.24) may be infeasible. Even if they do intersect, there is no guarantee that this remains true if $\delta_k$ is reduced; see [7]. To solve this problem, we use a reduced Hessian strategy. This strategy was suggested in [2,32] and subsequently implemented in [12,13,16,17,18,20]. It decomposes the step $s_k$ into two orthogonal components: the normal component $s^n_k$, which improves feasibility, and the tangential component $s^t_k$, which improves optimality. That is, $s_k = s^n_k+s^t_k$ and $s^t_k = \tilde{Z}_k\bar{s}^t_k$, where $\tilde{Z}_k$ is a matrix whose columns form a basis for the null space of $(V_k\nabla c_{e_k})^T$.

To compute $s^n_k$

The normal component $s^n_k$ is obtained by solving the following trust-region subproblem

\begin{array}{ll} \min & \frac{1}{2}\|c_{e_k}+(V_k\nabla c_{e_k})^T s^n\|^2\\ s.t. & \|s^n\|\leq\zeta\delta_k, \end{array}  (3.25)

for some $\zeta\in(0,1)$.

A conjugate gradient method [34], which is very cheap when the problem is large-scale and can handle an indefinite Hessian, is used to compute the normal component $s^n_k$. The main steps involved in applying the conjugate gradient method to solve subproblem (3.25) are presented in the following algorithm.

Algorithm 3.1. (A conjugate gradient method to calculate $s^n_k$)

Step 1. Set $s^n_0 = 0$, $r^n_0 = -V_k\nabla c_{e_k}c_{e_k}$, and $p^n_0 = r^n_0$; pick $\epsilon>0$.

Step 2. For $i = 0,1,...$ do

Compute $\gamma^n_i = \frac{r^{nT}_i r^n_i}{p^{nT}_i V_k\nabla c_{e_k}\nabla c_{e_k}^T V_k p^n_i}$.

Compute $\tau^n_i$ such that $\|s^n_i+\tau^n p^n_i\| = \delta_k$.

If $\gamma^n_i\leq 0$, or if $\gamma^n_i>\tau^n_i$, then set $s^n_k = s^n_i+\tau^n_i p^n_i$ and stop.

Otherwise set $s^n_{i+1} = s^n_i+\gamma^n_i p^n_i$ and

$r^n_{i+1} = r^n_i-\gamma^n_i V_k\nabla c_{e_k}\nabla c_{e_k}^T V_k p^n_i$.

If $\frac{\|r^n_{i+1}\|}{\|r^n_0\|}\leq\epsilon$,

set $s^n_k = s^n_{i+1}$ and stop.

Compute $\hat{\beta}_i = \frac{r^{nT}_{i+1}r^n_{i+1}}{r^{nT}_i r^n_i}$, and the new direction $p^n_{i+1} = r^n_{i+1}+\hat{\beta}_i p^n_i$.
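The sketch below mirrors Algorithm 3.1 for the normal subproblem (3.25): a truncated (Steihaug-type [34]) conjugate gradient iteration on the Gauss-Newton system with matrix $V_k\nabla c_{e_k}\nabla c_{e_k}^T V_k$ and right-hand side $-V_k\nabla c_{e_k}c_{e_k}$, stopped at the trust-region boundary. The argument A plays the role of $V_k\nabla c_{e_k}$ and radius the role of the bound in (3.25); it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def _to_boundary(s, p, radius):
    """Positive tau with ||s + tau*p|| = radius."""
    a, b, c = p @ p, 2.0 * (s @ p), s @ s - radius ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def cg_normal_step(A, ce, radius, tol=1e-8, max_iter=200):
    """Truncated CG for  min 0.5*||ce + A^T s||^2  subject to  ||s|| <= radius."""
    s = np.zeros(A.shape[0])
    r = -A @ ce                       # negative gradient at s = 0
    r0_norm = np.linalg.norm(r)
    if r0_norm == 0.0:
        return s
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ (A.T @ p)            # Gauss-Newton matrix (A A^T) applied to p
        curv = p @ Ap
        if curv <= 0.0:
            return s + _to_boundary(s, p, radius) * p    # follow p to the boundary
        gamma = (r @ r) / curv
        if np.linalg.norm(s + gamma * p) >= radius:
            return s + _to_boundary(s, p, radius) * p    # full CG step would leave the region
        s = s + gamma * p
        r_new = r - gamma * Ap
        if np.linalg.norm(r_new) <= tol * r0_norm:
            return s
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return s
```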

Given $s^n_k$, let $q(V_k s_k)$ be the quadratic model of the function (3.5), defined as follows:

q(V_k s_k) = \ell(x_k,\mu_{e_k};\sigma_{u_k})+(V_k\nabla_x\ell(x_k,\mu_{e_k};\sigma_{u_k}))^T s_k+\frac{1}{2}s_k^T B_k s_k.  (3.26)

To compute $s^t_k$

To compute the tangential component $s^t_k = \tilde{Z}_k\bar{s}^t_k$, solve the following trust-region subproblem

\begin{array}{ll} \min & [\tilde{Z}_k^T\nabla q_k(V_k s^n_k)]^T\bar{s}^t+\frac{1}{2}\bar{s}^{tT}\tilde{Z}_k^T B_k\tilde{Z}_k\bar{s}^t\\ s.t. & \|\tilde{Z}_k\bar{s}^t\|\leq\Delta_k, \end{array}  (3.27)

where $\nabla q_k(V_k s^n_k) = V_k\nabla_x\ell(x_k,\mu_{e_k};\sigma_{u_k})+B_k s^n_k$ and $\Delta_k = \sqrt{\delta_k^2-\|s^n_k\|^2}$. The main steps involved in applying the conjugate gradient method to solve subproblem (3.27) are presented in the following algorithm.

Algorithm 3.2. (A conjugate gradient method to compute $s^t_k$)

Step 1. Set $\bar{s}^t_0 = 0$, $r^t_0 = -\tilde{Z}_k^T\nabla q_k(V_k s^n_k)$, and $p^t_0 = r^t_0$.

Step 2. For $i = 0,1,...$ do

Compute $\gamma^t_i = \frac{r^{tT}_i r^t_i}{p^{tT}_i\tilde{Z}_k^T B_k\tilde{Z}_k p^t_i}$.

Compute $\tau^t_i$ such that $\|\bar{s}^t_i+\tau^t p^t_i\| = \Delta_k$.

If $\gamma^t_i\leq 0$, or if $\gamma^t_i>\tau^t_i$, then set $\bar{s}^t_k = \bar{s}^t_i+\tau^t_i p^t_i$ and stop.

Otherwise set $\bar{s}^t_{i+1} = \bar{s}^t_i+\gamma^t_i p^t_i$ and

$r^t_{i+1} = r^t_i-\gamma^t_i\tilde{Z}_k^T B_k\tilde{Z}_k p^t_i$.

If $\frac{\|r^t_{i+1}\|}{\|r^t_0\|}\leq\epsilon$,

set $\bar{s}^t_k = \bar{s}^t_{i+1}$ and stop.

Compute $\bar{\beta}_i = \frac{r^{tT}_{i+1}r^t_{i+1}}{r^{tT}_i r^t_i}$, and $p^t_{i+1} = r^t_{i+1}+\bar{\beta}_i p^t_i$.

After computing $s_k$, we set $x_{k+1} = x_k+V_k s_k$. To ensure that the matrix $V_k$ is nonsingular, we need to guarantee that $x_{k+1}\in int(E)$; so, a damping parameter $\varphi_k$ is required at every iteration $k$.

    To obtain the damping parameter φk

    The following algorithm clarifies the fundamental steps required to obtain the damping parameter φk.

    Algorithm 3.3. (Damping parameter φk)

    Step 1. Compute the parameter ωk as follows

\omega^{(i)}_k = \left\{\begin{array}{ll} \frac{\beta_a^{(i)}-x_k^{(i)}}{V_k^{(i)}s_k^{(i)}}, & \mbox{if } \beta_a^{(i)}>-\infty \mbox{ and } V_k^{(i)}s_k^{(i)}<0,\\ 1, & \mbox{otherwise}. \end{array}\right.

    Step 2. Compute the parameter υk as follows

\upsilon^{(i)}_k = \left\{\begin{array}{ll} \frac{\beta_b^{(i)}-x_k^{(i)}}{V_k^{(i)}s_k^{(i)}}, & \mbox{if } \beta_b^{(i)}<+\infty \mbox{ and } V_k^{(i)}s_k^{(i)}>0,\\ 1, & \mbox{otherwise}. \end{array}\right.

    Step 3. Compute the damping parameter φk as follows

\varphi_k = \min\{1,\min_i\{\omega^{(i)}_k,\upsilon^{(i)}_k\}\}.  (3.28)

    Step 4. Set xk+1=xk+φkVksk.
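Algorithm 3.3 is a standard step-to-the-boundary computation; a minimal sketch, assuming the bound entries are stored as finite numbers or $\pm\infty$:

```python
import numpy as np

def damping_parameter(x, Vs, beta_a, beta_b):
    """phi_k of (3.28): largest phi in (0, 1] such that x + phi*Vs stays in [beta_a, beta_b]."""
    phi = 1.0
    for i in range(x.size):
        if Vs[i] < 0.0 and np.isfinite(beta_a[i]):
            phi = min(phi, (beta_a[i] - x[i]) / Vs[i])    # omega_k^(i)
        elif Vs[i] > 0.0 and np.isfinite(beta_b[i]):
            phi = min(phi, (beta_b[i] - x[i]) / Vs[i])    # upsilon_k^(i)
    return phi
```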

To determine whether the scaled step $\varphi_k V_k s_k$ will be accepted or not, we need a merit function that connects the objective function and the constraints in such a way that progress in the merit function equates to progress in solving the problem. The following augmented Lagrangian function is used as a merit function:

\Phi(x,\mu_e;\sigma_u;\sigma_e) = f_u(x)+\mu_e^T c_e(x)+\frac{\sigma_u}{2}\|D(x)c_u(x)\|^2+\sigma_e\|c_e(x)\|^2,  (3.29)

    where σe is the penalty parameter.

    To test the scaled step, we use the following scheme to determine the Lagrange multiplier vector μek+1,

\underset{\mu_e}{\min}\ \|\nabla f_{u_{k+1}}+\nabla c_{e_{k+1}}\mu_e+\sigma_{u_k}\nabla c_{u_{k+1}}D_{k+1}c_{u_{k+1}}\|^2.  (3.30)

The following actual reduction and predicted reduction must be defined to determine whether the point $(x_{k+1},\mu_{e_{k+1}})$ will be accepted in the next iteration or not. The actual reduction in the merit function in moving from $(x_k,\mu_{e_k})$ to $(x_k+\varphi_k V_k s_k,\mu_{e_{k+1}})$ is defined as

Ared_k = \Phi(x_k,\mu_{e_k};\sigma_{u_k};\sigma_{e_k})-\Phi(x_k+\varphi_k V_k s_k,\mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k}).

    The predicted reduction Predk in the merit function (3.29) is defined as follows

Pred_k = q_k(0)-q_k(\varphi_k V_k s_k)+\sigma_{e_k}[\|c_{e_k}\|^2-\|c_{e_k}+\nabla c_{e_k}^T\varphi_k V_k s_k\|^2],  (3.31)

    where

q_k(\varphi_k V_k s_k) = \ell(x_k,\mu_{e_k})+\nabla_x\ell(x_k,\mu_{e_k})^T\varphi_k V_k s_k+\frac{1}{2}\varphi_k^2 s_k^T G_k s_k+\frac{\sigma_{u_k}}{2}\|D_k(c_{u_k}+\nabla c_{u_k}^T\varphi_k V_k s_k)\|^2.  (3.32)

To ensure that $Pred_k\geq 0$, the penalty parameter $\sigma_{e_k}$ must be updated.

    To update the penalty parameter σek

To ensure that $Pred_k\geq 0$, we need to update the penalty parameter $\sigma_{e_k}$ by using the following algorithm.

    Algorithm 3.4. (Process to update σek)

    If

Pred_k<\frac{\sigma_{e_k}}{2}[\|c_{e_k}\|^2-\|c_{e_k}+\nabla c_{e_k}^T\varphi_k V_k s_k\|^2],  (3.33)

    set

\sigma_{e_k} = \frac{2[q_k(\varphi_k V_k s_k)-q_k(0)]}{\|c_{e_k}\|^2-\|c_{e_k}+\nabla c_{e_k}^T\varphi_k V_k s_k\|^2}+\tilde{b}_0,  (3.34)

    where ˜b0>0 is a small fixed constant.

    Else, set σek+1=σek.

    End if.

    For more details, see [12].
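A sketch of Algorithm 3.4, written with the inequality of (3.33) as reconstructed above (the penalty is raised only when the predicted reduction falls below half of the weighted feasibility gain):

```python
def update_sigma_e(sigma_e, pred, q_step, q_zero, feas_gain, b_tilde=0.1):
    """Algorithm 3.4.  feas_gain = ||c_e||^2 - ||c_e + grad(c_e)^T phi*V*s||^2,
    q_step = q_k(phi_k V_k s_k), q_zero = q_k(0)."""
    if pred < 0.5 * sigma_e * feas_gain:
        # Chosen so that Pred_k >= (sigma_e/2)*feas_gain holds afterwards, cf. (3.34).
        sigma_e = 2.0 * (q_step - q_zero) / feas_gain + b_tilde
    return sigma_e
```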

    To test φkVksk and update δk

    Motivated by the nonmonotone trust-region strategy in [31], we define

t_k = \frac{C_k-\Phi(x_k+\varphi_k V_k s_k,\mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})}{Pred_k},  (3.35)

    where

C_k = \left\{\begin{array}{ll} \Phi(x_k,\mu_{e_k};\sigma_{u_k};\sigma_{e_k}), & \mbox{if } k = 0,\\ \frac{\xi_{k-1}Q_{k-1}C_{k-1}+\Phi(x_k+\varphi_k V_k s_k,\mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})}{Q_k}, & \mbox{if } k\geq 1, \end{array}\right.  (3.36)

    and

Q_k = \left\{\begin{array}{ll} 1, & \mbox{if } k = 0,\\ \xi_{k-1}Q_{k-1}+1, & \mbox{if } k\geq 1, \end{array}\right.  (3.37)

such that $0\leq\xi_{min}\leq\xi_{k-1}\leq\xi_{max}\leq 1$ and

\xi_k = \left\{\begin{array}{ll} 0.5\xi_0, & \mbox{if } k = 1,\\ 0.5(\xi_{k-1}+\xi_{k-2}), & \mbox{if } k\geq 2. \end{array}\right.  (3.38)
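The weighted-average reference value of [31] can be carried along with two scalars; a sketch of one update of (3.36) and (3.37), where phi_trial denotes the merit value $\Phi(x_k+\varphi_k V_k s_k,\mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})$ and xi_prev is the weight $\xi_{k-1}$ produced by (3.38):

```python
def update_nonmonotone_reference(C_prev, Q_prev, xi_prev, phi_trial):
    """One update of the recursions (3.36)-(3.37); returns (C_k, Q_k)."""
    Q_new = xi_prev * Q_prev + 1.0                            # (3.37)
    C_new = (xi_prev * Q_prev * C_prev + phi_trial) / Q_new   # (3.36)
    return C_new, Q_new
```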

The procedure below tests the trial step $\varphi_k V_k s_k$ and updates the radius $\delta_k$.

    Algorithm 3.5. (Test φkVksk and update δk)

Step 0. Choose $0<\alpha_1<\alpha_2<1$, $\delta_{max}>\delta_{min}$, and $0<\zeta_1<1<\zeta_2$.

    Step 1. (Compute tk)

    Compute ξk by using (3.38).

    Compute Qk as defined in (3.37).

    Compute Ck as defined in (3.36).

Compute $t_k = \frac{C_k-\Phi(x_k+\varphi_k V_k s_k,\mu_{e_{k+1}};\sigma_{u_k};\sigma_{e_k})}{Pred_k}$.

    Step 2. (Update the trial step sk)

While $t_k<\alpha_1$ or $Pred_k\leq 0$,

set $\delta_k = \zeta_1\|s_k\|$.

    To evaluate a new step sk, go to Algorithms (3.1) and (3.2).

    Step 3. (Update δ)

If $\alpha_1\leq t_k<\alpha_2$, then

    set xk+1=xk+φkVksk.

$\delta_{k+1} = \max(\delta_{min},\delta_k)$.

    End if.

If $t_k\geq\alpha_2$, then

    set xk+1=xk+φkVksk.

$\delta_{k+1} = \min\{\max\{\delta_{min},\zeta_2\delta_k\},\delta_{max}\}$.

    End if.
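Steps 2 and 3 of Algorithm 3.5 amount to the following decision rule (a sketch; the default parameter values follow the setting reported in Section 4, and delta_max is a hypothetical placeholder since the experiments tie it to $\delta_0$). The shrink branch signals the caller to recompute the step via Algorithms 3.1 and 3.2.

```python
def accept_and_update_radius(t_k, pred_k, delta_k, s_norm,
                             alpha1=1e-2, alpha2=0.75, zeta1=0.5, zeta2=2.0,
                             delta_min=1e-3, delta_max=1e3):
    """Returns (accepted, new_delta) following Steps 2-3 of Algorithm 3.5."""
    if t_k < alpha1 or pred_k <= 0.0:
        return False, zeta1 * s_norm                                   # shrink, recompute step
    if t_k < alpha2:
        return True, max(delta_min, delta_k)                           # accept, keep radius
    return True, min(max(delta_min, zeta2 * delta_k), delta_max)       # accept, enlarge radius
```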

    To update the parameter σuk

To update $\sigma_{u_k}$, we use a scheme proposed in [39]. Let the tangential predicted decrease $Tpred_k$, obtained from the tangential component $s^t_k$, be given by

Tpred_k(\bar{s}^t_k) = q_k(V_k s^n_k)-q_k(V_k(s^n_k+\tilde{Z}_k\bar{s}^t_k)).  (3.39)

The main steps used to update the parameter $\sigma_{u_k}$ are clarified in the following algorithm.

    Algorithm 3.6. (Updating σuk)

    Step 1. Set σu0=1.

    Step 2. If

Tpred_k\geq\|V_k\nabla c_{u_k}D_k c_{u_k}\|\min\{\|V_k\nabla c_{u_k}D_k c_{u_k}\|,\delta_k\},  (3.40)

    then σuk+1=σuk,

    else, σuk+1=2σuk.

    End if

Finally, the nonmonotone trust-region algorithm is terminated when either

$\|\tilde{Z}_k^T V_k\nabla_x\ell(x_k,\mu_{e_k})\|+\|V_k\nabla c_{u_k}D_k c_{u_k}\|+\|c_{e_k}\|\leq\varepsilon_1$

or $\|s_k\|\leq\varepsilon_2$, for some $\varepsilon_1,\varepsilon_2>0$.

    Nonmonotone trust-region algorithm

    In the algorithm that follows, we will outline the main steps of the nonmonotone trust-region algorithm in order to solve subproblem (3.21).

    Algorithm 3.7. (The nonmonotone trust-region algorithm)

Step 0. Start with $x_0\in int(E)$. Compute $D_0$, $V_0$, $\mu_{e_0}$, and $\theta_0$. Set $\sigma_{u_0} = 1$, $\sigma_{e_0} = 1$, and $\tilde{b}_0 = 0.1$.

Choose $\varepsilon_1>0$ and $\varepsilon_2>0$. Choose $\delta_{min}$, $\delta_{max}$, and $\delta_0$ such that $\delta_{min}\leq\delta_0\leq\delta_{max}$.

Choose $\alpha_1$, $\alpha_2$, $\zeta_1$, and $\zeta_2$ such that $0<\zeta_1<1<\zeta_2$ and $0<\alpha_1<\alpha_2<1$. Set $k = 0$.

    Step 1. (Termination)

If $\|\tilde{Z}_k^T V_k\nabla_x\ell(x_k,\mu_{e_k})\|+\|V_k\nabla c_{u_k}D_k c_{u_k}\|+\|c_{e_k}\|\leq\varepsilon_1$, then stop the algorithm.

    Step 2. (Computing step Vkφksk)

    Evaluate the normal component step snk by using Algorithm (3.1).

    Evaluate the tangential step ˉstk by using Algorithm (3.2).

Set $s_k = s^n_k+\tilde{Z}_k\bar{s}^t_k$.

If $\|s_k\|\leq\varepsilon_2$, then stop the algorithm.

    Else, compute the parameter φk by using (3.28).

    Set xk+1=xk+Vkφksk.

    End if.

    Step 3. Evaluate Dk+1 which is defined by (3.1).

    Step 4. (Updating the multiplier vector μek+1)

    Computing the Lagrange multiplier vector μek+1 by using (3.30).

    Step 5. (Updating the penalty parameter σe)

    Using Algorithm (3.4) to update the penalty parameter σek.

    Step 6. Compute tk as defined in (3.35) by using both Qk as defined in (3.37) and Ck as defined in (3.36).

    Using Algorithm (3.5) to test the scaling step φkVksk and update the radius of the trust-region.

    Step 7. Updating the parameter σuk using Algorithm (3.6).

    Step 8. Utilize (3.12) to evaluate the matrix Vk+1.

    Step 9. Set k=k+1 and go to Step 1.

    The global convergence theory for Algorithm (3.7) is similar to the global convergence theory of the algorithm presented in ([12]) when used to solve a general nonlinear programming problem.

We will outline the main steps of the nonmonotone active-set interior-point trust-region algorithm based on the CHKS smoothing function for solving problem (2.1) in the following algorithm.

Algorithm 3.8. (CHKS nonmonotone active-set interior-point trust-region algorithm)

Step 1. Reduce the nonlinear bilevel programming problem (2.1) to the one-objective optimization problem (2.2) by applying the Karush-Kuhn-Tucker optimality conditions to the lower-level problem.

Step 2. Use the CHKS smoothing function to convert the nondifferentiable problem (2.2) into the smoothed problem (2.4).

    Step 3. Utilize an active-set strategy to convert the nonlinearly constrained optimization problem (2.4) to the equality constrained optimization problem with bounded variables (3.2).

    Step 4. Utilize an interior-point method with the diagonal scaling matrix V(x) given in (3.12) to obtain the nonlinear system [(3.13) - (3.14)].

    Step 5. Utilize Newton's method with diagonal matrix θ as defined in (3.15), to solve the nonlinear system [(3.13)-(3.14)] and obtain the equivalent subproblem (3.21).

Step 6. Solve subproblem (3.21) by using the nonmonotone trust-region method given by Algorithm (3.7).

In this section, we introduce an extensive variety of numeric NBLP problems to illustrate the validity of the proposed Algorithm (3.8) for solving problem (2.1).

We now present the MATLAB numerical results for Algorithm (3.8) with a starting point $x_0\in int(E)$. The following parameter setting was utilized: $\alpha_1 = 10^{-2}$, $\alpha_2 = 0.75$, $\zeta_1 = 0.5$, $\zeta_2 = 2$, $\delta_{min} = 10^{-3}$, $\delta_0 = \max(\|s^{cp}_0\|,\delta_{min})$, $\delta_{max} = 10^3\delta_0$, $\varepsilon_1 = 10^{-10}$, and $\varepsilon_2 = 10^{-12}$.

To demonstrate the efficacy of the suggested Algorithm (3.8) in obtaining solutions to the NBLP problem (2.1), we offer a wide variety of numeric NBLP problems in this section. Fifteen benchmark instances from [4,22,27,30,33] were used to test the proposed Algorithm (3.8).

To check the consistency of the results, 10 independent runs with different initial starting points were carried out for each test example. Table 1 summarizes the statistical data for all examples and demonstrates that the results of the proposed Algorithm (3.8) are approximately equal or equal to those of the compared method in [18] and of the algorithms in the literature.

    Table 1.  Comparison of the results of Algorithm (3.8) with those of various existing methods(ref).
Problem name | Algorithm (3.8): $(y^*,z^*)$, $f_u$ / $f_l$ | Method [18]: $(y^*,z^*)$, $f_u$ / $f_l$ | Ref.: $(y^*,z^*)$, $f_u$ / $f_l$
    TP1 (0.8438, 0.7657, 0) -2.0769 (.8465, 0.7695, 0) -2.0772 (0.8438, 0.7657, 0) -2.0769
    -0.5863 -0.5919 -0.5863
    TP2 (0.6111, 0.3890, 0, 0.64013 (0.6111, 0.3890, 0, 0.64013 (0.609, 0.391, 0, 0.6426
    0, 1.8339) 1.6816 0, 1.8339) 1.6816 0, 1.828) 1.6708
    TP3 (0.97, 3.14, -8.92 (0.97, 3.14 -8.92 (0.97, 3.14, -8.92
    2.6, 1.8) -6.05 2.6, 1.8) -6.05 2.6, 1.8) -6.05
    TP4 (.5, .5, .5, .5) -1 (0.5, 0.5, 0.5, 0.5) -1 (0.5, 0.5, 0.5, 0.5) -1
    0 0 0
    TP5 (9.9998, 9.9998) 99.9996 (9.9953, 9.9955) 99.907 (10.03, 9.969) 100.58
    3.7160e-07 1.8628e-04 0.001
    TP6 (1.8884, 0.89041, 0) -1.2067 (1.8889, 0.88889, -1.4074 NA 3.57
    7.6062 6.8157e-06) 7.6172 2.4
    TP7 (1, 0) 17 (1, 0) 17 (1, 0) 17
    1 1 1
    TP8 (0.75, 0.75, -2.25 (0.7513, 0.7513, -2.2480 (3/2, 3/2, -2.1962
    0.75, 0.75) 0 0.752, 0.752) 0 3/2, 3/2) 0
    TP9 (11.25, 5) 2250 (11.25, 5) 2250 (11.25, 5) 2250
    197.753 197.753 197.753
    TP10 (1, 0, 0) 0 (1, 0, 1) 1 (1, 0, 1) 1
    0 -1 -1
    TP11 (25, 30, 5.0024 (25, 30, 5, 10) 5 (25, 30, 5, 10) 5
    4.9996, 10) 1.5000e-06 0 0
    TP12 (3, 5) 9 (3, 5) 9 (3, 5) 9
    0 0 0
    TP13 (0, 2, 1.875, -18.679 (0, 2, 1.875, 0.9063) -12.68 (0, 2, 1.875, 0.9063) -12.68
    0.90625) -1.0156 -1.016 -1.016
    TP14 (10.016, 0.81967) 81.328 (10, 0.011) 8.1978e+01 (10.04, 0.1429) 82.44
    -0.33593 0 0.271
    TP15 (0, 0.9, 0, 0.6, 0.4) -29.2 (0, 0.9, 0, 0.6, 0.4) -29.2 (0, 0.9, 0, 0.6, 0.4) -29.2
    3.2 3.2 3.2


For purposes of comparison, Table 2 provides the corresponding results for the average number of iterations (iter), the average number of function evaluations (nfunc), and the average CPU time in seconds (CPU(s)) obtained via the method in [18]. The results in Table 2 demonstrate that the results of the proposed Algorithm (3.8) are approximately equal to or better than those obtained via the comparative algorithms in the literature. The results show that our proposed method is capable of handling the NBLP problem (2.1) whether the upper or lower levels are convex or not, and the computed results converge to an optimal solution that is approximately equal or equal to the optimal solution stated in the literature. Finally, a comparison of the solutions obtained via the proposed Algorithm (3.8) with those found in the literature shows that the proposed Algorithm (3.8) is capable of finding the best solution to some problems with less time, fewer function evaluations, and fewer iterations.

    Table 2.  Comparison of the results of Algorithm (3.8) with those of the method in [18], with respect to the number of iterations, the number of function evaluations, and time.
Problem name | Algorithm (3.8): iter, nfunc, CPU(s) | Method [18]: iter, nfunc, CPU(s)
    TP1 10 11 1.54 10 13 1.62
    TP2 8 9 1.78 9 12 1.87
    TP3 6 8 2.1 7 8 2.52
    TP4 10 12 1.76 12 13 1.92
    TP5 6 8 1.65 6 7 1.523
    TP6 7 9 3.5 8 10 3.95
    TP7 11 13 1.8 11 12 1.652
    TP8 4 5 1.569 11 12 0.953
    TP9 9 11 2.23 8 10 1.87
    TP10 4 7 2.887 5 6 3.31
    TP11 9 11 3.542 10 13 3.632
    TP12 4 5 1.12 7 9 1.33
    TP13 5 7 2.1 5 8 1.998
    TP14 5 6 1.87 5 6 1.97
    TP15 5 6 20.212 6 7 20.125


    Test Problem 1 [28]:

    miny1fu=z21+z22+y214y1s.t.0y12,minz1,z2fl=z21+0.5z22+z1z2+(13y1)z1+(1+y1)z2,s.t.2z1+z22y11,z10,z20.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 1 is converted to the following SNLP problem

    miny1,z1,z2,λfu=z21+z22+y214y1s.t.2z1+z2+(13y1)+(1+y1)+2λ=0,z1+z2+(1+y1)+λ=0,λ(2z1+z22y11)(λ+2z1+z22y11)2+4˜ϵ2=0,0y12,z10,z20.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y_1^*,z_1^*,z_2^*) = (0.8438, 0.7657, 0)$, $f_u = -2.0769$, and $f_l = -0.5863$.

    Test Problem 2 [28]:

    miny1,y2fu=z21+z23z1z34z27y1+4y2s.t.y1+y21,y10,y20minz1,z2,z3fl=z21+0.5z22+0.5z23+z1z2+(13y1)z1+(1+y2)z2,s.t.2z1+z2z3+y12y2+20,z10;z20z30.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 2 is converted to the following SNLP problem

    miny1,y2,z1,z2,z3,λfu=z21+z23z1z34z27y1+4y2s.t.2z1+z2+(13y1)+2λ=0,z1+z2+(1+y2)+λ=0,z3λ=0,λ(2z1+z2z3+y12y2+2)(λ+2z1+z2z3+y12y2+2)2+4˜ϵ2=0,y1+y21,y10;y20z10;z20z30.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y1,y2,z1,z2,z3)=(0.6111,0.3890,0,0,1.8339), fu=0.64013, and fl=1.6816.

    Test Problem 3 [28]:

    miny1,y2fu=0.1(y21+y22)3z14z2+0.5(z21+z22)s.t.minz1,z2fl=0.5(z21+5z22)2z1z2y1z1y2z2,s.t.0.333z1+z220,z10.333z220,z10,z20.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 3 is converted to the following SNLP problem

    miny1,y2,z1,z2,λ1,λ2fu=0.1(y21+y22)3z14z2+0.5(z21+z22)s.t.z12z2y10.333λ1+λ2=0,5z22z1y2+λ10.333λ2=0,λ1(0.333z1+z22)(λ10.333z1+z22)2+4˜ϵ2=0,λ2(z10.333z22)(λ2+z10.333z22)2+4˜ϵ2=0,y10,y20,z10,z20.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y_1^*,y_2^*,z_1^*,z_2^*) = (0.97, 3.14, 2.6, 1.8)$, $f_u = -8.92$, and $f_l = -6.05$.

    Test Problem 4 [28]:

    miny1,y2fu=y212y1+y222y2+z21+z22S.Ty10,y20minz1,z2fl=(z1y1)2+(z2y2)2,s.t.0.5z11.5,0.5z21.5.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 4 is converted to the following SNLP problem

    miny1,y2,z1,z2fu=y212y1+y222y2+z21+z22s.t2(z1y1)=0,2(z2y2)=0,0.5z11.5,0.5z21.5,y10,y20.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y_1^*,y_2^*,z_1^*,z_2^*) = (0.5, 0.5, 0.5, 0.5)$, $f_u = -1$, and $f_l = 0$.

    Test Problem 5 [28]:

    minyfu=y2+(z10)2s.t.y+z0,0y15,minzfl=(y+2z30)2,s.t.y+z20,0z20.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 5 is converted to the following SNLP problem

    miny,z,λfu=y2+(z10)2s.t4(y+2z30)+λ=0,λ(y+z20)(λ+y+z20)2+4˜ϵ2=0,y+z0,0y15,0z20.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y,z)=(9.9998,9.9998), fu=99.9996, and fl=3.7160e07.

    Test Problem 6 [28]:

    miny1fu=(y11)2+2z212y1s.t.y10,minz1,z2fl=(2z14)2+(2z21)2+y1z1,s.t.4y1+5z1+4z212,4y15z1+4z24,4y14z1+5z24,4y1+4z1+5z24,z10,z20.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 6 is converted to the following SNLP problem

    miny1,z1,z2,λ1,λ2,λ3,λ4fu=(y11)2+2z212y1s.t.4(2z14)+y1+5λ15λ24λ3+4λ4=0,4(2z21)+4λ1+4λ2+5λ3+5λ4=0,λ1(4y1+5z1+4z212)(λ1+4y1+5z1+4z212)2+4˜ϵ2=0,λ2(4y15z1+4z2+4)(λ24y15z1+4z2+4)2+4˜ϵ2=0,λ3(4y14z1+5z24)(λ3+4y14z1+5z24)2+4˜ϵ2=0,λ4(4y1+4z1+5z24)(λ44y1+4z1+5z24)2+4˜ϵ2=0,y10,z10,z20.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y_1^*,z_1^*,z_2^*) = (1.8884, 0.89041, 0)$, $f_u = -1.2067$, and $f_l = 7.6062$.

    Test Problem 7 [28]:

    minyfu=(y5)2+(2z+1)2s.t.y0,minzfl=(2z1)21.5yz,s.t.3y+z3,y0.5z4,y+z7,z0.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 7 is converted to the following SNLP problem

    miny,z,λ1,λ2,λ3fu=(y5)2+(2z+1)2s.t.4(2z1)1.5y+λ10.5λ2+λ3=0,λ1(3y+z+3)(λ13y+z+3)2+4˜ϵ2=0,λ2(y0.5z4)(λ2+y0.5z4)2+4˜ϵ2=0,λ3(y+z7)(λ2+y+z7)2+4˜ϵ2=0,y0,z0.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y,z)=(1,0), fu=17, and fl=1.

    Test Problem 8 [28]:

    miny1,y2fu=y213y1+y223y2+z21+z22s.t.y10,y20,minz1,z2fl=(z1y1)2+(z2y2)2,s.t.0.5z11.5,0.5z21.5.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 8 is converted to the following SNLP problem

    miny1,y2,z1,z2fu=y213y1+y223y2+z21+z22s.t.2(z1y1)=0,2(z2y2)=0,0.5z11.5,0.5z21.5,y10,y20.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y_1^*,y_2^*,z_1^*,z_2^*) = (0.75, 0.75, 0.75, 0.75)$, $f_u = -2.25$, and $f_l = 0$.

    Test Problem 9 [23]:

    minyfu=16y2+9z2s.t.4y+z0,y0,minzfl=(y+z20)4,s.t.4y+z500,z0.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 9 is converted to the following SNLP problem

    miny,z,λfu=16y2+9z2s.t4(y+z20)3+λ=0,λ(4y+z50)(λ+4y+z50)2+4˜ϵ2=0,4y+z0,y0,z0.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y,z)=(11.25,5), fu=2250, and fl=197.753.

    Test Problem 10 [23]:

    miny1fu=y31z1+z2s.t.0y11,minz1,z2fl=z2s.t.y1z110,z21+y1z21,z20.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 10 is converted to the following SNLP problem

    miny1,z1,z2,λ1,λ2fu=y31z1+z2s.t.λ1y1+2λ2z1=0,1+λ2y1=0,λ1(y1z110)(λ1+y1z110)2+4˜ϵ2=0,λ2(z21+y1z21)(λ2+z21+y1z21)2+4˜ϵ2=0,0y11,z20.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y1,z1,z2)=(1,0,0), fu=0, and fl=0.

    Test Problem 11 [40]:

    miny1,y2fu=2y1+2y23z13z260s.t.y1+y2+z12z240,0y150,0y250,minz1,z2fl=(z1y1+20)2+(z2y2+20)2,s.t.y12z110,y22z210,10z120,10z220.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 11 is converted to the following smooth nonlinear programming problem

    miny1,y2,z1,z2,λ1,λ2fu=2y1+2y23z13z260s.t.2(z1y1+20)+2λ1=0,2(z2y2+20)+2λ2=0,λ1(10y1+2z1)(λ1+10y1+2z1)2+4˜ϵ2=0,λ2(10y2+2z2)(λ2+10y2+2z2)2+4˜ϵ2=0,y1+y2+z12z240,0y150,0y250,10z120,10z220.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y1,y2,z1,z2)=(25,30,4.9996,10), fu=5.0024, and fl=1.5000e06.

    Test Problem 12 [23]:

    minyfu=(y3)2+(z2)2s.t.2y+z10,y2z+20,y+2z140,0y8,minzfl=(z5)2s.t.z0.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 12 is converted to the following SNLP problem

    miny,zfu=(y3)2+(z2)2s.t.2(z5)=0,2y+z10,y2z+20,y+2z140,0y8,z0.

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y,z)=(3,5), fu=9, and fl=0.

    Test Problem 13 [40]:

    miny1,y2fu=y213y224z1+z22s.t.y21+2y24,y10,y20,minz1,z2fl=2y21+z215z2,s.t.y212y1+2y222z1+z23,y2+3z14z24,z10,z20.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 13 is converted to the following SNLP problem.

    miny1,y2,z1,z2,λ1,λ2fu=y213y224z1+z22s.t.2z1+2λ13λ2=0,5λ1+4λ2=0,λ1(y21+2y12y22+2z1z23)(λ1y21+2y12y22+2z1z23)2+4˜ϵ2=0,λ2(y23z1+4z24)(λ2y23z1+4z24)2+4˜ϵ2=0,y21+2y24,y10,y20,z10,z20.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y_1^*,y_2^*,z_1^*,z_2^*) = (0, 2, 1.875, 0.90625)$, $f_u = -18.679$, and $f_l = -1.0156$.

    Test Problem 14 [40]:

    minyfu=(y1)2+(z1)2s.t.y0,minzfl=0.5z2+500z50yzs.t.z0.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 14 is converted to the following SNLP problem.

    miny,zfu=(y1)2+(z1)2s.t.z50y+500=0y0,z0.

Applying Algorithm (3.7) to the above nonlinear programming problem, we have that $(y^*,z^*) = (10.016, 0.81967)$, $f_u = 81.328$, and $f_l = -0.33593$.

    Test Problem 15 [40]:

    miny1,y2fu=8y14y2+4z140z24z3s.t.y10,y20minzfl=y1+2y2+z1+z2+2z3,s.t.z2+z3z11,2y1z1+2z20.5z31,2y2+2z1z20.5z31,zi0,i=1,2,3.

    Applying the Karush-Kuhn-Tucker condition to the lower level problem and using the CHKS smoothing function, Test Problem 15 is converted to the following SNLP problem,

    \begin{array}{ll} \underset{y_1,y_2,z_1,z_2,z_3,\lambda_1,\lambda_2,\lambda_3}{\min} & f_u = -8y_1-4y_2+4z_1-40z_2-4z_3\\ s.t. \; & 1-\lambda_1-\lambda_2+2\lambda_3 = 0,\\ \;\;\;\; & 1+\lambda_1+2\lambda_2-\lambda_3 = 0,\\ \;\;\; & 2+\lambda_1-0.5\lambda_2-0.5\lambda_3 = 0,\\ \;\;\;\;\;\; & \lambda_1-( z_2 +z_3-z_1- 1)-\sqrt{(\lambda_1+ z_2 +z_3-z_1- 1)^2+4\tilde{\epsilon}^2} = 0,\\ \;\;\;\;\;\;\; & \lambda_2-( 2y_1-z_1+2z_2-0.5z_3- 1)-\sqrt{(\lambda_2+ 2y_1-z_1+2z_2-0.5z_3- 1)^2+4\tilde{\epsilon}^2} = 0,\\ \;\;\;\;\;\;\; & \lambda_3-( 2y_2+2z_1-z_2-0.5z_3- 1)-\sqrt{(\lambda_3+ 2y_2+2z_1-z_2-0.5z_3- 1)^2+4\tilde{\epsilon}^2} = 0,\\ \;\;\;\;\;\;\; & y_1\geq 0,\;\;\;y_2\geq 0,\;\;z_i \geq 0,\;\;\;i = 1,2,3. \end{array}

    Applying Algorithm (3.7) to the above nonlinear programming problem we have that (y_1^*, y_2^*, z_1^*, z_2^*, z_3^*) = (0, 0.9, 0, 0.6, 0.4) , f_u = -29.2 , and f_l = 3.2 .

    In this section, the efficacy of the proposed Algorithm (3.8) was tested by using a watershed water trading decision-making model based on bilevel programming [42]. The upper decision-maker is the watershed management agency which acts as the planning, controlling and coordinating center of a watershed, and each user is the lower decision-maker. The mathematical formulation for the watershed water trading decision-making model is formulated as follows:

    \begin{array}{ll} \underset{w,t,r_1,g_1,r_2,g_2}{max}& F = 0.4W + t(q_1+q_2)+f_1+f_2,\\ \; \; \; s.t.& r_1+r_2+w = 90,\\ \; \; \; & q_1+q_2+w \leq 90,\\ \; \; \; &g_1+g_2 = 20,\\ \; \; \; & r_1\geq 38,\ r_2\geq 42,\ g_1 \geq 7, \ g_2 \geq 8, \ w \geq 6, \ 0.3 \leq t \leq 2.0,\\ \; \; \; \; \; \; \underset{q_1, l_1}{max } &f_1 = 0.7q_1 -q_1t -0.3(45-q_1)^2 +(r_1-q_1)[0.9-0.01(r_1+r_2-q_1-q_2)],\\ &-0.2(0.2q_1-l_1)^2+(g_1-l_1)[0.8-0.01(g_1+g_2-l_1-l_2)],\\ \; \; \; \; \; \; \; \; \; \underset{q_2, l_2}{max } &f_2 = 0.8q_2 -q_2t -0.2(47-q_2)^2 +(r_2-q_2)[0.9-0.01(r_1+r_2-q_1-q_2)],\\ \; \; \; \; \; \; \; &-0.1(0.3q_2-l_2)^2+(g_2-l_2)[0.8-0.01(g_1+g_2-l_1-l_2)],\\ \; \; \; \; \; \; \; \; \; s.t.& l_1+l_2 \leq 20,\\ \; \; \; \; \; \; \; \; \; & q_1,\ l_1 \geq 0,\\ \; \; \; \; \; \; \; \; \; &q_2, \ l_2\geq 0\\ \end{array}

    where q_1 and q_2 are the actual water intake volumes of water consumer A and water consumer B respectively. l_1 and l_2 are the wastewater discharge volumes of two users respectively. r_1 and r_2 are the water rights of the two users respectively. g_1 and g_2 are the emission rights of two users respectively. w is the ecological water requirement of the watershed. t represents the water resource fees. Applying the Karush-Kuhn-Tucker condition to the lower-level problem and using the CHKS smoothing function, the watershed water trading decision-making model is converted to the following SNLP problem,

    \begin{array}{ll} \underset{w,t,r_1,g_1,r_2,g_2}{max}& F = 0.4W + t(q_1+q_2)+f_1+f_2,\\ \; \; \; s.t.& r_1+r_2+w = 90,\\ \; \; \; &g_1+g_2 = 20,\\ &0.8-t+0.4(47-q_2)+0.01(r_2-q_2)-(0.9-0.01(r_1+r_2-q_1-q_2))-0.06(0.3q_2-l_2) = 0,\\ &0.2(0.3q_2-l_2)+0.01(g_2-l_2)-(0.8-0.01(g_1+g_2-l_1-l_2))-\lambda_1 = 0,\\ &\lambda_1-(l_1+l_2-20)-\sqrt{(\lambda_1+l_1+l_2-20)^2+4\tilde{\epsilon}^2} = 0,\\ &0.7-t+0.6(45-q_1)+0.01(r_1-q_1)-(0.9-0.01(r_1+r_2-q_1-q_2))-.08(0.2q_1-l_1)-0.01\lambda_2 = 0,\\ &0.4(0.2q_1-l_1)+0.01(g_1-l_1)-(0.8-0.01(g_1+g_2-l_1-l_2))-0.01\lambda_3\\ &+\lambda_4(-1-\frac{\lambda_1+l_1+l_2-20}{\sqrt{(\lambda_1+l_1+l_2-20)^2+4\tilde{\epsilon}^2}}) = 0,\\ \; \; \; & q_1+q_2+w \leq 90,\\ \; \; \; & g_1 \geq 7, g_2 \geq 8, r_1\geq 38, r_2\geq 42, w \geq 6, 0.3 \leq t \leq 2.0,\\ & q_1,\ l_1 \geq 0,\\ &q_2, \ l_2\geq 0. \end{array}

    Applying Algorithm (3.7) to the above nonlinear programming problem we have the following data at \tilde{\epsilon} = 0.001 . g_1 = 9 , g_2 = 10.450 , l_1 = 6.8755 , l_2 = 9.0751 , q_1 = 41.497 , q_2 = 43 , r_1 = 39.332 , r_2 = 44.269 , t = 0.3 , w = 6.2338 , f_1 = 12.17 , f_2 = 19.043 , and F = 59.055 .

Table 3 compares the obtained results with those of the method in [25] and of various existing methods (Ref.). From Table 3, we can see that the solutions obtained via Algorithm (3.8) are better than those given in [25] and those of various existing methods (Ref.). Therefore, Algorithm (3.8) can also be applied to practical problems.

    Table 3.  Comparison of the results of Algorithm (3.8) with method [25] and with those of various existing methods(ref) for a watershed water application.
    Parameter Method in [25] Algorithm (3.8) Results in Ref.
    q_1 42 41.497 41.2016
    q_2 41.9680 43 42.5388
    l_1 6.9984 6.8755 6.4772
    l_2 9.1751 9.0751 9.1611
    r_1 39.9679 39.332 39.4861
    r_2 44 44.269 44.2542
    g_1 9 9 9.0015
    g_2 11 10.45 10.9985
    w 6.0321 6.2338 6.2596
    t 0.3000 0.3 0.3226
    F 58.97837 59.055 58.4482
    f_1 13.40286 12.17 10.964
    f_2 17.97226 19.043 17.964


In this paper, Algorithm (3.8) has been presented to solve the NBLP problem (2.1). The Karush-Kuhn-Tucker conditions are used together with the CHKS smoothing function to convert the NBLP problem (2.1) into a standard SNLP problem. The SNLP problem is solved by using the proposed nonmonotone active interior-point trust-region technique to obtain an approximately optimal solution to the NBLP problem. In the proposed nonmonotone active interior-point trust-region technique, the smoothed nonlinear programming problem is transformed into an equality-constrained optimization problem with bounded variables by using an active-set approach and the penalty method. To solve the equality-constrained optimization problem with bounded variables, Newton's interior-point method is used. Because it is a local method, it might not converge if the starting point is far from a stationary point. To overcome this problem, the nonmonotone trust-region strategy is used. It is generally promising and efficient, and it can overcome the aforementioned shortcomings.

Applications to mathematical programs with equilibrium constraints are given to clarify the effectiveness of the proposed approach. Numerical results reflect the good behavior of the proposed technique, and the computed results converge to the optimal solutions. It is clear from the comparison between the solutions obtained by using Algorithm (3.8) and those of the algorithm in [18] that Algorithm (3.8) is able to find the optimal solution of some problems with fewer iterations, fewer function evaluations, and less time.

Furthermore, the usefulness of the CHKS smoothing function with the nonmonotone active interior-point trust-region algorithm in efforts to solve NBLP problems was examined by using a real-world case about a watershed trading decision-making problem. Numerical experiments show that the suggested method surpasses rival algorithms in terms of efficacy.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    The authors declare that they have no competing interests.



    [1] M. A. Amouzegar, A global optimization method for nonlinear bilevel programming problems, IEEE Trans. Syst. Men Cybernet., 29 (1999), 771–777. https://doi.org/10.1109/3477.809031 doi: 10.1109/3477.809031
[2] R. Byrd, E. Omojokun, Robust trust-region methods for nonlinearly constrained optimization, In: Second SIAM Conference on Optimization, Houston, 1987.
    [3] J. F. Bard, Coordination of a multidivisional organization through two levels of management, Omega, 11 (1983), 457–468. https://doi.org/10.1016/0305-0483(83)90038-5 doi: 10.1016/0305-0483(83)90038-5
    [4] J. F. Bard, Convex two-level optimization, Math. Program., 40 (1988), 15–27. https://doi.org/10.1007/BF01580720 doi: 10.1007/BF01580720
    [5] B. Chen, P. T. Harker, A non-interior-point continuation method for linear complementarity problem, SIAM J. Matrix Anal. Appl., 14 (1993), 1168–1190. https://doi.org/10.1137/0614081 doi: 10.1137/0614081
    [6] I. Das, An interior point algorithm for the general nonlinear programming problem with trust region globlization, In: Technical Report, 1996.
    [7] J. Dennis, M. Heinkenschloss, L. Vicente, Trust-region interior-point SQP algorithms for a class of nonlinear programming problems, SIAM J. Control Optim., 36 (1998), 1750–1794. https://doi.org/10.1137/S036012995279031 doi: 10.1137/S036012995279031
    [8] J. Dennis, M. El-Alem, K. Williamson, A trust-region approach to nonlinear systems of equalities and inequalities, SIAM J. Optim., 9 (1999), 291–315. https://doi.org/10.1137/S1052623494276208 doi: 10.1137/S1052623494276208
    [9] N. Y. Deng, Y. Xiao, F. J. Zhou, Nonmonotonic trust region algorithm, J. Optim. Theory Appl., 76 (1993), 259–285. https://doi.org/10.1007/BF00939608 doi: 10.1007/BF00939608
    [10] B. El-Sobky, A multiplier active trust-region algorithm for solving general nonlinear programming problem, Appl. Math. Comput., 219 (2012), 928–946.
    [11] B. El-Sobky, An interior-point penalty active-set trust-region algorithm, J. Egypt. Math. Soc., 24 (2016), 672–680. https://doi.org/10.1016/j.joems.2016.04.003 doi: 10.1016/j.joems.2016.04.003
    [12] B. El-Sobky, An active-set interior-point trust-region algorithm, Pacific J. Optim., 14 (2018), 125–159.
    [13] B. El-Sobky, A. Abotahoun, An active-set algorithm and a trust-region approach in constrained minimax problem, Comp. Appl. Math., 37 (2018), 2605–2631. https://doi.org/10.1007/s40314-017-0468-3 doi: 10.1007/s40314-017-0468-3
    [14] B. El-Sobky, A. Abotahoun, A trust-region algorithm for solving mini-max problem, J. Comput. Math., 36 (2018), 881–902.
    [15] B. El-Sobky, Y. Abouel-Naga, A penalty method with trust-region mechanism for nonlinear bilevel optimization problem, J. Comput. Appl. Math., 340 (2018), 360–374. https://doi.org/10.1016/j.cam.2018.03.004 doi: 10.1016/j.cam.2018.03.004
    [16] B. El-Sobky, Y.Abo-Elnaga, A. Mousa, A. El-Shorbagy, Trust-region based penalty barrier algorithm for constrained nonlinear programming problems: an application of design of minimum cost canal sections, Mathematics, 9 (2021), 1551. https://doi.org/10.3390/math9131551 doi: 10.3390/math9131551
    [17] B. El-Sobky, G. Ashry, An interior-point trust-region algorithm to solve a nonlinear bilevel programming problem, AIMS Math., 7 (2022), 5534–5562. http://dx.doi.org/10.3934/math.2022307 doi: 10.3934/math.2022307
    [18] B. El-Sobky, G. Ashry, An Active-set Fischer-Burmeister trust-region algorithm to solve a nonlinear bilevel optimization problem, Fractal Fract., 6 (2022), 412. https://doi.org/10.3390/fractalfract6080412 doi: 10.3390/fractalfract6080412
    [19] B. El-Sobky, G. Ashry, Y. Abo-Elnaga, An active-set with barrier method and trust-region mechanism to solve a nonlinear bilevel programming problem, AIMS Math., 7 (2022), 16112–16146. http://dx.doi.org/10.3934/math.2022882 doi: 10.3934/math.2022882
    [20] B. El-Sobky, M. F. Zidan, A trust-region based an active-set interior-point algorithm for fuzzy continuous Static Games, AIMS Math., 8 (2023), 13706–13724. http://dx.doi.org/10.3934/math.2023696 doi: 10.3934/math.2023696
    [21] J. B. E. Etoa, Solving quadratic convex bilevel programming problems using a smoothing method, Appl. Math. Comput., 217 (2011), 6680–6690. https://doi.org/10.1016/j.amc.2011.01.066 doi: 10.1016/j.amc.2011.01.066
    [22] J. E. Falk, J. M. Liu, On bilevel programming, Part Ⅰ: general nonlinear cases, Math. Program., 70 (1995), 47–72. https://doi.org/10.1007/BF01585928 doi: 10.1007/BF01585928
    [23] H. Gumus, A. Flouda, Global optimization of nonlinear bilevel programming problems, J. Global Optim., 20 (2001), 1–31. https://doi.org/10.1023/A:1011268113791 doi: 10.1023/A:1011268113791
    [24] Y. Ishizuka, E. Aiyoshi, Double penalty method for bilevel optimization problems, Ann. Oper. Res., 34 (1992), 73–88. https://doi.org/10.1007/BF02098173 doi: 10.1007/BF02098173
    [25] Y. Jiang, X. Li, C. Huang, X. Wu, Application of particle swarm optimization based on CHKS smoothing function for solving nonlinear bilevel programming problem, Appl. Math. Comput., 219 (2013), 4332–4339. https://doi.org/10.1016/j.amc.2012.10.010 doi: 10.1016/j.amc.2012.10.010
    [26] C. Kanzow, Some noninterior continuation methods for linear complementarity problems, SIAM J. Matrix Anal. Appl., 17 (1996), 851–868. https://doi.org/10.1137/S0895479894273134 doi: 10.1137/S0895479894273134
    [27] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science, 220 (1983), 671–680. https://doi.org/10.1126/science.220.4598.671 doi: 10.1126/science.220.4598.671
    [28] H. Li, Y. Jiao, L. Zhang, Orthogonal genetic algorithm for solving quadratic bilevel programming problems, J. Syst. Eng. Elect., 21 (2010), 763–770. https://doi.org/10.3969/j.issn.1004-4132.2010.05.008 doi: 10.3969/j.issn.1004-4132.2010.05.008
    [29] Y. B. Lv, T. S. Hu, G. M. Wang, Z. P. Wan, A penalty function method based on Kuhn-Tucker condition for solving linear bilevel programming, Appl. Math. Comput., 188 (2007) 808–813. https://doi.org/10.1016/j.amc.2006.10.045 doi: 10.1016/j.amc.2006.10.045
    [30] D. Muu, N. Quy, A global optimization method for solving convex quadratic bilevel programming problems, J. Global Optim., 26 (2003), 199–219. https://doi.org/10.1023/A:1023047900333 doi: 10.1023/A:1023047900333
    [31] J. Mo, C. Liu, S. Yan, A nonmonotone trust region method based on nonincreasing technique of weighted average of the successive function value, J. Comput. Appl. Math., 209 (2007), 97–108. https://doi.org/10.1016/j.cam.2006.10.070 doi: 10.1016/j.cam.2006.10.070
    [32] E. Omojokun, Trust-region strategies for optimization with nonlinear equality and inequality constraints, PhD thesis, Department of Computer Science, University of Colorado, Boulder, Colorado, 1989.
    [33] V. Oduguwa, R. Roy, Bi-level optimization using genetic algorithm, In: Proceedings 2002 IEEE International Conference on Artificial Intelligence Systems (ICAIS 2002), 2002,123–128. https://doi.org/10.1109/ICAIS.2002.1048121
[34] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20 (1983), 626–637. https://doi.org/10.1137/0720042 doi: 10.1137/0720042
    [35] S. Smale, Algorithms for solving equations, In: Proceeding of International Congress of Mathematicians, American Mathematics Society, Rhode Island, 1987, 72–195.
    [36] G. Savard, J. Gauvin, The steepest descent direction for the nonlinear bilevel programming problem, Oper. Res. Lett., 15 (1994), 265–272. https://doi.org/10.1016/0167-6377(94)90086-8 doi: 10.1016/0167-6377(94)90086-8
    [37] Z. J. Shi, J. H. Guo, A new trust region methods for unconstrained optimization, J. Comput. Appl. Math., 213 (2008), 509–520. https://doi.org/10.1016/j.cam.2007.01.027 doi: 10.1016/j.cam.2007.01.027
    [38] P. L. Toint, Non-monotone trust-region algorithm for nonlinear optimization subject to convex constraints, Math. Program., 77 (1997), 69–94. https://doi.org/10.1007/BF02614518 doi: 10.1007/BF02614518
    [39] Y. Yuan, On the convergence of a new trust region algorithm, Numer. Math., 70 (1995), 515–539. https://doi.org/10.1007/s002110050133 doi: 10.1007/s002110050133
    [40] Y. Wang, Y. Jiao, H. Li, An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-Handling scheme, IEEE T. Syst. Man Cy. C, 35 (2005), 221–232. https://doi.org/10.1109/TSMCC.2004.841908 doi: 10.1109/TSMCC.2004.841908
    [41] G. M. Wang, X. J. Wang, Z. P. Wan, Y. B. Lv, A globally convergent algorithm for a class of bilevel nonlinear programming problem, Appl. Math. Comput., 188 (2007), 166–172. https://doi.org/10.1016/j.amc.2006.09.130 doi: 10.1016/j.amc.2006.09.130
    [42] C. Y. Wu, Y. Z. Zhao, Watershed water trading decision-making model based on bilevel programming, Oper. Res. Manage. Sci., in Chinese, 20 (2011), 30–37.
    [43] X. S. Zhang, J. L. Zhang, L. Z. Liao, A nonmonotone adaptive trust region method and its convergence, Comput. Math. Appl., 45 (2003), 1469–1477. https://doi.org/10.1016/S0898-1221(03)00130-5 doi: 10.1016/S0898-1221(03)00130-5
    [44] H. C. Zhang, W. W. Hager, A nonmonotone line search technique for unconstrained optimization, SIAM J. Optim., 14 (2004), 1043–1056.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)